00:00:00.001 Started by upstream project "autotest-per-patch" build number 132861
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.135 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.136 The recommended git tool is: git
00:00:00.136 using credential 00000000-0000-0000-0000-000000000002
00:00:00.138 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.197 Fetching changes from the remote Git repository
00:00:00.198 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.268 Using shallow fetch with depth 1
00:00:00.268 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.268 > git --version # timeout=10
00:00:00.317 > git --version # 'git version 2.39.2'
00:00:00.317 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.348 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.348 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.444 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.457 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.471 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.471 > git config core.sparsecheckout # timeout=10
00:00:07.484 > git read-tree -mu HEAD # timeout=10
00:00:07.502 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.533 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.533 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.649 [Pipeline] Start of Pipeline
00:00:07.659 [Pipeline] library
00:00:07.661 Loading library shm_lib@master
00:00:08.629 Library shm_lib@master is cached. Copying from home.
00:00:08.661 [Pipeline] node
00:00:08.745 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_3
00:00:08.748 [Pipeline] {
00:00:08.756 [Pipeline] catchError
00:00:08.757 [Pipeline] {
00:00:08.771 [Pipeline] wrap
00:00:08.781 [Pipeline] {
00:00:08.793 [Pipeline] stage
00:00:08.795 [Pipeline] { (Prologue)
00:00:08.812 [Pipeline] echo
00:00:08.813 Node: VM-host-SM0
00:00:08.818 [Pipeline] cleanWs
00:00:08.826 [WS-CLEANUP] Deleting project workspace...
00:00:08.826 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.831 [WS-CLEANUP] done
00:00:09.024 [Pipeline] setCustomBuildProperty
00:00:09.119 [Pipeline] httpRequest
00:00:11.554 [Pipeline] echo
00:00:11.556 Sorcerer 10.211.164.20 is alive
00:00:11.565 [Pipeline] retry
00:00:11.568 [Pipeline] {
00:00:11.581 [Pipeline] httpRequest
00:00:11.586 HttpMethod: GET
00:00:11.587 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.588 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.607 Response Code: HTTP/1.1 200 OK
00:00:11.607 Success: Status code 200 is in the accepted range: 200,404
00:00:11.608 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:17.797 [Pipeline] }
00:00:17.815 [Pipeline] // retry
00:00:17.824 [Pipeline] sh
00:00:18.106 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:18.120 [Pipeline] httpRequest
00:00:19.274 [Pipeline] echo
00:00:19.275 Sorcerer 10.211.164.20 is alive
00:00:19.285 [Pipeline] retry
00:00:19.287 [Pipeline] {
00:00:19.303 [Pipeline] httpRequest
00:00:19.308 HttpMethod: GET
00:00:19.308 URL: http://10.211.164.20/packages/spdk_c360129e7ca8b8645c12ef2d2cdc671e00def108.tar.gz
00:00:19.309 Sending request to url: http://10.211.164.20/packages/spdk_c360129e7ca8b8645c12ef2d2cdc671e00def108.tar.gz
00:00:19.314 Response Code: HTTP/1.1 200 OK
00:00:19.314 Success: Status code 200 is in the accepted range: 200,404
00:00:19.315 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk_c360129e7ca8b8645c12ef2d2cdc671e00def108.tar.gz
00:01:25.570 [Pipeline] }
00:01:25.588 [Pipeline] // retry
00:01:25.595 [Pipeline] sh
00:01:25.873 + tar --no-same-owner -xf spdk_c360129e7ca8b8645c12ef2d2cdc671e00def108.tar.gz
00:01:29.171 [Pipeline] sh
00:01:29.489 + git -C spdk log --oneline -n5
00:01:29.489 c360129e7 [TEST]
00:01:29.489 e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:29.489 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:29.489 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:29.489 66289a6db build: use VERSION file for storing version
00:01:29.506 [Pipeline] writeFile
00:01:29.522 [Pipeline] sh
00:01:29.803 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:29.815 [Pipeline] sh
00:01:30.097 + cat autorun-spdk.conf
00:01:30.097 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:30.097 SPDK_TEST_NVMF=1
00:01:30.097 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:30.097 SPDK_TEST_USDT=1
00:01:30.097 SPDK_TEST_NVMF_MDNS=1
00:01:30.097 SPDK_RUN_UBSAN=1
00:01:30.097 NET_TYPE=virt
00:01:30.097 SPDK_JSONRPC_GO_CLIENT=1
00:01:30.097 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:30.104 RUN_NIGHTLY=0
00:01:30.106 [Pipeline] }
00:01:30.120 [Pipeline] // stage
00:01:30.135 [Pipeline] stage
00:01:30.137 [Pipeline] { (Run VM)
00:01:30.150 [Pipeline] sh
00:01:30.432 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:30.432 + echo 'Start stage prepare_nvme.sh'
00:01:30.432 Start stage prepare_nvme.sh
00:01:30.432 + [[ -n 2 ]]
00:01:30.432 + disk_prefix=ex2
00:01:30.432 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_3 ]]
00:01:30.432 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/autorun-spdk.conf ]]
00:01:30.432 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/autorun-spdk.conf
00:01:30.432 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:30.432 ++ SPDK_TEST_NVMF=1
00:01:30.432 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:30.432 ++ SPDK_TEST_USDT=1
00:01:30.432 ++ SPDK_TEST_NVMF_MDNS=1
00:01:30.432 ++ SPDK_RUN_UBSAN=1
00:01:30.432 ++ NET_TYPE=virt
00:01:30.432 ++ SPDK_JSONRPC_GO_CLIENT=1
00:01:30.432 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:30.432 ++ RUN_NIGHTLY=0
00:01:30.432 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_3
00:01:30.432 + nvme_files=()
00:01:30.432 + declare -A nvme_files
00:01:30.432 + backend_dir=/var/lib/libvirt/images/backends
00:01:30.432 + nvme_files['nvme.img']=5G
00:01:30.432 + nvme_files['nvme-cmb.img']=5G
00:01:30.432 + nvme_files['nvme-multi0.img']=4G
00:01:30.432 + nvme_files['nvme-multi1.img']=4G
00:01:30.432 + nvme_files['nvme-multi2.img']=4G
00:01:30.432 + nvme_files['nvme-openstack.img']=8G
00:01:30.432 + nvme_files['nvme-zns.img']=5G
00:01:30.432 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:30.432 + (( SPDK_TEST_FTL == 1 ))
00:01:30.432 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:30.432 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:30.432 + for nvme in "${!nvme_files[@]}"
00:01:30.432 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:30.432 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:30.432 + for nvme in "${!nvme_files[@]}"
00:01:30.432 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:30.432 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:30.432 + for nvme in "${!nvme_files[@]}"
00:01:30.432 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:30.432 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:30.432 + for nvme in "${!nvme_files[@]}"
00:01:30.432 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:30.432 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:30.432 + for nvme in "${!nvme_files[@]}"
00:01:30.432 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:30.432 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:30.432 + for nvme in "${!nvme_files[@]}"
00:01:30.432 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:30.432 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:30.432 + for nvme in "${!nvme_files[@]}"
00:01:30.432 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:30.691 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:30.691 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:30.691 + echo 'End stage prepare_nvme.sh'
00:01:30.691 End stage prepare_nvme.sh
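Note on the loop above: create_nvme_img.sh itself is not shown in this log, but the "Formatting ..., fmt=raw ... preallocation=falloc" lines match qemu-img output, so a minimal stand-in for the pattern could look like the sketch below (the script body is an assumption; names, sizes, and the ex2- disk_prefix come from the trace). The shuffled creation order in the log is expected: "${!nvme_files[@]}" walks a Bash associative array in hash order, not insertion order.

    #!/usr/bin/env bash
    # Hypothetical stand-in for spdk/scripts/vagrant/create_nvme_img.sh as traced above:
    # an associative array maps backing-image names to sizes, and each file is
    # created as a raw, falloc-preallocated image.
    set -e

    backend_dir=/var/lib/libvirt/images/backends
    disk_prefix=ex2
    declare -A nvme_files=(
        [nvme.img]=5G
        [nvme-multi0.img]=4G
        [nvme-multi1.img]=4G
        [nvme-multi2.img]=4G
    )

    mkdir -p "$backend_dir"
    for nvme in "${!nvme_files[@]}"; do
        # Emits the same "Formatting '...', fmt=raw size=... preallocation=falloc" line
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/$disk_prefix-$nvme" "${nvme_files[$nvme]}"
    done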
00:01:30.702 [Pipeline] sh
00:01:30.983 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:30.983 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:01:30.983
00:01:30.983 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk/scripts/vagrant
00:01:30.983 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk
00:01:30.983 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_3
00:01:30.983 HELP=0
00:01:30.983 DRY_RUN=0
00:01:30.983 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:01:30.983 NVME_DISKS_TYPE=nvme,nvme,
00:01:30.983 NVME_AUTO_CREATE=0
00:01:30.983 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:01:30.983 NVME_CMB=,,
00:01:30.983 NVME_PMR=,,
00:01:30.983 NVME_ZNS=,,
00:01:30.983 NVME_MS=,,
00:01:30.983 NVME_FDP=,,
00:01:30.983 SPDK_VAGRANT_DISTRO=fedora39
00:01:30.983 SPDK_VAGRANT_VMCPU=10
00:01:30.983 SPDK_VAGRANT_VMRAM=12288
00:01:30.983 SPDK_VAGRANT_PROVIDER=libvirt
00:01:30.983 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:30.983 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:30.983 SPDK_OPENSTACK_NETWORK=0
00:01:30.983 VAGRANT_PACKAGE_BOX=0
00:01:30.983 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:01:30.983 FORCE_DISTRO=true
00:01:30.983 VAGRANT_BOX_VERSION=
00:01:30.983 EXTRA_VAGRANTFILES=
00:01:30.983 NIC_MODEL=e1000
00:01:30.983
00:01:30.983 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt'
00:01:30.983 /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_3
00:01:34.269 Bringing machine 'default' up with 'libvirt' provider...
00:01:34.835 ==> default: Creating image (snapshot of base box volume).
00:01:35.093 ==> default: Creating domain with the following settings...
00:01:35.093 ==> default:  -- Name:              fedora39-39-1.5-1721788873-2326_default_1734348215_6f9901c0cd30d65f8b6c
00:01:35.093 ==> default:  -- Domain type:       kvm
00:01:35.093 ==> default:  -- Cpus:              10
00:01:35.093 ==> default:  -- Feature:           acpi
00:01:35.093 ==> default:  -- Feature:           apic
00:01:35.093 ==> default:  -- Feature:           pae
00:01:35.093 ==> default:  -- Memory:            12288M
00:01:35.093 ==> default:  -- Memory Backing:    hugepages:
00:01:35.093 ==> default:  -- Management MAC:
00:01:35.093 ==> default:  -- Loader:
00:01:35.093 ==> default:  -- Nvram:
00:01:35.093 ==> default:  -- Base box:          spdk/fedora39
00:01:35.093 ==> default:  -- Storage pool:      default
00:01:35.093 ==> default:  -- Image:             /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734348215_6f9901c0cd30d65f8b6c.img (20G)
00:01:35.093 ==> default:  -- Volume Cache:      default
00:01:35.093 ==> default:  -- Kernel:
00:01:35.093 ==> default:  -- Initrd:
00:01:35.093 ==> default:  -- Graphics Type:     vnc
00:01:35.093 ==> default:  -- Graphics Port:     -1
00:01:35.093 ==> default:  -- Graphics IP:       127.0.0.1
00:01:35.093 ==> default:  -- Graphics Password: Not defined
00:01:35.093 ==> default:  -- Video Type:        cirrus
00:01:35.093 ==> default:  -- Video VRAM:        9216
00:01:35.093 ==> default:  -- Sound Type:
00:01:35.093 ==> default:  -- Keymap:            en-us
00:01:35.093 ==> default:  -- TPM Path:
00:01:35.093 ==> default:  -- INPUT:             type=mouse, bus=ps2
00:01:35.093 ==> default:  -- Command line args:
00:01:35.093 ==> default:    -> value=-device,
00:01:35.093 ==> default:    -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:35.093 ==> default:    -> value=-drive,
00:01:35.093 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:01:35.093 ==> default:    -> value=-device,
00:01:35.093 ==> default:    -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.093 ==> default:    -> value=-device,
00:01:35.093 ==> default:    -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:35.093 ==> default:    -> value=-drive,
00:01:35.093 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:35.093 ==> default:    -> value=-device,
00:01:35.093 ==> default:    -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.093 ==> default:    -> value=-drive,
00:01:35.093 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:35.093 ==> default:    -> value=-device,
00:01:35.093 ==> default:    -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.093 ==> default:    -> value=-drive,
00:01:35.093 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:35.093 ==> default:    -> value=-device,
00:01:35.093 ==> default:    -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
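Reassembled, the "-> value=" pairs above describe this NVMe topology on the QEMU command line (arguments exactly as logged, with libvirt's trailing comma separators dropped and the rest of the machine definition omitted). Controller nvme-0 (serial 12340) carries one namespace; nvme-1 (serial 12341) carries three, which is why the guest later reports nvme0n1 and nvme1n1..nvme1n3 in the setup.sh status output further down:

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096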
00:01:35.093 ==> default: Creating shared folders metadata...
00:01:35.352 ==> default: Starting domain.
00:01:37.263 ==> default: Waiting for domain to get an IP address...
00:01:55.341 ==> default: Waiting for SSH to become available...
00:01:55.341 ==> default: Configuring and enabling network interfaces...
00:01:57.873     default: SSH address: 192.168.121.234:22
00:01:57.873     default: SSH username: vagrant
00:01:57.873     default: SSH auth method: private key
00:02:00.404 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:08.519 ==> default: Mounting SSHFS shared folder...
00:02:09.086 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:09.086 ==> default: Checking Mount..
00:02:10.461 ==> default: Folder Successfully Mounted!
00:02:10.461 ==> default: Running provisioner: file...
00:02:11.392     default: ~/.gitconfig => .gitconfig
00:02:11.651
00:02:11.651 SUCCESS!
00:02:11.651
00:02:11.651 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:02:11.651 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:11.651 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:02:11.651
00:02:11.660 [Pipeline] }
00:02:11.675 [Pipeline] // stage
00:02:11.684 [Pipeline] dir
00:02:11.685 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt
00:02:11.686 [Pipeline] {
00:02:11.699 [Pipeline] catchError
00:02:11.701 [Pipeline] {
00:02:11.714 [Pipeline] sh
00:02:11.993 + vagrant ssh-config --host vagrant
00:02:11.993 + sed -ne /^Host/,$p
00:02:11.993 + tee ssh_conf
00:02:16.180 Host vagrant
00:02:16.180   HostName 192.168.121.234
00:02:16.180   User vagrant
00:02:16.180   Port 22
00:02:16.180   UserKnownHostsFile /dev/null
00:02:16.180   StrictHostKeyChecking no
00:02:16.180   PasswordAuthentication no
00:02:16.180   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:16.180   IdentitiesOnly yes
00:02:16.180   LogLevel FATAL
00:02:16.180   ForwardAgent yes
00:02:16.180   ForwardX11 yes
00:02:16.180
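The three "+" lines above are Bash xtrace output for a single pipeline, not three separate commands; with the quoting that xtrace strips restored, it is:

    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf

The sed range keeps everything from the first "Host" line onward, and tee both writes ssh_conf (used by every later ssh/scp -F ssh_conf call) and echoes it, which is why the Host block appears in the log.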
00:02:16.194 [Pipeline] withEnv
00:02:16.196 [Pipeline] {
00:02:16.209 [Pipeline] sh
00:02:16.490 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:16.490 source /etc/os-release
00:02:16.490 [[ -e /image.version ]] && img=$(< /image.version)
00:02:16.490 # Minimal, systemd-like check.
00:02:16.490 if [[ -e /.dockerenv ]]; then
00:02:16.490         # Clear garbage from the node's name:
00:02:16.490         # agt-er_autotest_547-896 -> autotest_547-896
00:02:16.490         # $HOSTNAME is the actual container id
00:02:16.490         agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:16.490         if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:16.490                 # We can assume this is a mount from a host where container is running,
00:02:16.490                 # so fetch its hostname to easily identify the target swarm worker.
00:02:16.490                 container="$(< /etc/hostname) ($agent)"
00:02:16.490         else
00:02:16.490                 # Fallback
00:02:16.490                 container=$agent
00:02:16.490         fi
00:02:16.490 fi
00:02:16.490 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:16.490
00:02:16.758 [Pipeline] }
00:02:16.775 [Pipeline] // withEnv
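Given the /etc/os-release and uname -a values printed later in this run, the probe's final echo would emit a line of the form below; the /image.version content is not shown in the log, and the container field presumably falls back to N/A because the test runs in a full VM (no /.dockerenv) rather than a Docker swarm node:

    Fedora Linux 39|6.8.9-200.fc39.x86_64|<contents of /image.version, or N/A>|N/A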
00:02:16.783 [Pipeline] setCustomBuildProperty
00:02:16.798 [Pipeline] stage
00:02:16.800 [Pipeline] { (Tests)
00:02:16.818 [Pipeline] sh
00:02:17.098 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:17.370 [Pipeline] sh
00:02:17.655 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:17.927 [Pipeline] timeout
00:02:17.927 Timeout set to expire in 1 hr 0 min
00:02:17.929 [Pipeline] {
00:02:17.943 [Pipeline] sh
00:02:18.223 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:18.818 HEAD is now at c360129e7 [TEST]
00:02:18.830 [Pipeline] sh
00:02:19.109 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:19.380 [Pipeline] sh
00:02:19.660 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:19.934 [Pipeline] sh
00:02:20.214 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:02:20.472 ++ readlink -f spdk_repo
00:02:20.472 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:20.472 + [[ -n /home/vagrant/spdk_repo ]]
00:02:20.472 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:20.472 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:20.472 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:20.472 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:20.472 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:20.472 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:02:20.472 + cd /home/vagrant/spdk_repo
00:02:20.472 + source /etc/os-release
00:02:20.472 ++ NAME='Fedora Linux'
00:02:20.472 ++ VERSION='39 (Cloud Edition)'
00:02:20.472 ++ ID=fedora
00:02:20.472 ++ VERSION_ID=39
00:02:20.472 ++ VERSION_CODENAME=
00:02:20.472 ++ PLATFORM_ID=platform:f39
00:02:20.472 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:20.472 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:20.472 ++ LOGO=fedora-logo-icon
00:02:20.473 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:20.473 ++ HOME_URL=https://fedoraproject.org/
00:02:20.473 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:20.473 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:20.473 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:20.473 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:20.473 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:20.473 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:20.473 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:20.473 ++ SUPPORT_END=2024-11-12
00:02:20.473 ++ VARIANT='Cloud Edition'
00:02:20.473 ++ VARIANT_ID=cloud
00:02:20.473 + uname -a
00:02:20.473 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:20.473 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:20.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:20.731 Hugepages
00:02:20.731 node     hugesize     free /  total
00:02:20.731 node0   1048576kB        0 /      0
00:02:20.990 node0      2048kB        0 /      0
00:02:20.990
00:02:20.990 Type     BDF             Vendor  Device  NUMA     Driver      Device  Block devices
00:02:20.990 virtio   0000:00:03.0    1af4    1001    unknown  virtio-pci  -       vda
00:02:20.990 NVMe     0000:00:10.0    1b36    0010    unknown  nvme        nvme0   nvme0n1
00:02:20.990 NVMe     0000:00:11.0    1b36    0010    unknown  nvme        nvme1   nvme1n1 nvme1n2 nvme1n3
00:02:20.990 + rm -f /tmp/spdk-ld-path
00:02:20.990 + source autorun-spdk.conf
00:02:20.990 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.990 ++ SPDK_TEST_NVMF=1
00:02:20.990 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:20.990 ++ SPDK_TEST_USDT=1
00:02:20.990 ++ SPDK_TEST_NVMF_MDNS=1
00:02:20.990 ++ SPDK_RUN_UBSAN=1
00:02:20.990 ++ NET_TYPE=virt
00:02:20.990 ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:20.990 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:20.990 ++ RUN_NIGHTLY=0
00:02:20.990 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:20.990 + [[ -n '' ]]
00:02:20.990 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:20.990 + for M in /var/spdk/build-*-manifest.txt
00:02:20.990 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:20.990 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:20.990 + for M in /var/spdk/build-*-manifest.txt
00:02:20.990 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:20.990 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:20.990 + for M in /var/spdk/build-*-manifest.txt
00:02:20.990 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:20.990 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:20.990 ++ uname
00:02:20.990 + [[ Linux == \L\i\n\u\x ]]
00:02:20.990 + sudo dmesg -T
00:02:20.990 + sudo dmesg --clear
00:02:20.990 + dmesg_pid=5260
00:02:20.990 + [[ Fedora Linux == FreeBSD ]]
00:02:20.990 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:20.990 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:20.990 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:20.990 + sudo dmesg -Tw
00:02:20.990 + [[ -x /usr/src/fio-static/fio ]]
00:02:20.990 + export FIO_BIN=/usr/src/fio-static/fio
00:02:20.990 + FIO_BIN=/usr/src/fio-static/fio
00:02:20.990 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:20.990 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:20.990 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:20.990 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:20.990 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:20.990 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:20.990 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:20.990 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:20.990 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:21.249 11:24:21 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:21.249 11:24:21 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:21.249 11:24:21 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:21.249 11:24:21 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:21.249 11:24:21 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:21.249 11:24:21 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1
00:02:21.249 11:24:21 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1
00:02:21.250 11:24:21 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:21.250 11:24:21 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt
00:02:21.250 11:24:21 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1
00:02:21.250 11:24:21 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:21.250 11:24:21 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:02:21.250 11:24:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:21.250 11:24:21 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:21.250 11:24:21 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:21.250 11:24:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:21.250 11:24:21 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:21.250 11:24:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:21.250 11:24:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:21.250 11:24:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:21.250 11:24:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.250 11:24:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.250 11:24:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.250 11:24:21 -- paths/export.sh@5 -- $ export PATH
00:02:21.250 11:24:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.250 11:24:21 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:21.250 11:24:21 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:21.250 11:24:21 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734348261.XXXXXX
00:02:21.250 11:24:21 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734348261.HyRb4U
00:02:21.250 11:24:21 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:21.250 11:24:21 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:21.250 11:24:21 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:21.250 11:24:21 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:21.250 11:24:21 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:21.250 11:24:21 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:21.250 11:24:21 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:21.250 11:24:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:21.250 11:24:21 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
00:02:21.250 11:24:21 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:21.250 11:24:21 -- pm/common@17 -- $ local monitor
00:02:21.250 11:24:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.250 11:24:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.250 11:24:21 -- pm/common@25 -- $ sleep 1
00:02:21.250 11:24:21 -- pm/common@21 -- $ date +%s
00:02:21.250 11:24:21 -- pm/common@21 -- $ date +%s
00:02:21.250 11:24:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734348261
00:02:21.250 11:24:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734348261
00:02:21.250 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734348261_collect-cpu-load.pm.log
00:02:21.250 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734348261_collect-vmstat.pm.log
00:02:22.185 11:24:22 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:22.185 11:24:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:22.185 11:24:22 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:22.185 11:24:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:22.185 11:24:22 -- spdk/autobuild.sh@16 -- $ date -u
00:02:22.185 Mon Dec 16 11:24:22 AM UTC 2024
00:02:22.185 11:24:22 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:22.185 v25.01-rc1-3-gc360129e7
00:02:22.185 11:24:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:22.185 11:24:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:22.185 11:24:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:22.185 11:24:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:22.185 11:24:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:22.185 11:24:22 -- common/autotest_common.sh@10 -- $ set +x
00:02:22.185 ************************************
00:02:22.185 START TEST ubsan
00:02:22.185 ************************************
00:02:22.185 using ubsan
00:02:22.185 11:24:22 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:22.185
00:02:22.185 real	0m0.000s
00:02:22.185 user	0m0.000s
00:02:22.185 sys	0m0.000s
00:02:22.185 11:24:22 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:22.185 ************************************
00:02:22.185 END TEST ubsan
00:02:22.185 11:24:22 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:22.185 ************************************
00:02:22.185 11:24:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:22.185 11:24:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:22.185 11:24:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:22.185 11:24:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:22.185 11:24:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:22.185 11:24:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:22.185 11:24:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:22.185 11:24:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:22.185 11:24:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
00:02:22.443 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:22.443 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:22.701 Using 'verbs' RDMA provider
00:02:35.912 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:50.904 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:50.904 go version go1.21.1 linux/amd64
00:02:50.904 Creating mk/config.mk...done.
00:02:50.904 Creating mk/cc.flags.mk...done.
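The config_params string traced at autobuild_common.sh@509 appears to be derived from the autorun-spdk.conf flags sourced at the top of the run. A simplified sketch of that apparent relationship (an assumption for illustration, not SPDK's actual get_config_params, which covers many more flags):

    # Sketch: how the SPDK_* toggles from autorun-spdk.conf seem to surface
    # as ./configure options in the trace above.
    source /home/vagrant/spdk_repo/autorun-spdk.conf
    config_params="--enable-debug --enable-werror"
    [[ $SPDK_TEST_USDT -eq 1 ]]         && config_params+=" --with-usdt"
    [[ $SPDK_RUN_UBSAN -eq 1 ]]         && config_params+=" --enable-ubsan"
    [[ $SPDK_TEST_NVMF_MDNS -eq 1 ]]    && config_params+=" --with-avahi"
    [[ $SPDK_JSONRPC_GO_CLIENT -eq 1 ]] && config_params+=" --with-golang"
    ./configure $config_params --with-shared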
00:02:50.904 Type 'make' to build.
00:02:50.904 11:24:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:50.904 11:24:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:50.904 11:24:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:50.904 11:24:50 -- common/autotest_common.sh@10 -- $ set +x
00:02:50.904 ************************************
00:02:50.904 START TEST make
00:02:50.904 ************************************
00:02:50.904 11:24:50 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:05.805 The Meson build system
00:03:05.805 Version: 1.5.0
00:03:05.805 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:05.805 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:05.805 Build type: native build
00:03:05.805 Program cat found: YES (/usr/bin/cat)
00:03:05.805 Project name: DPDK
00:03:05.805 Project version: 24.03.0
00:03:05.805 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:05.805 C linker for the host machine: cc ld.bfd 2.40-14
00:03:05.805 Host machine cpu family: x86_64
00:03:05.805 Host machine cpu: x86_64
00:03:05.805 Message: ## Building in Developer Mode ##
00:03:05.805 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:05.805 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:05.805 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:05.805 Program python3 found: YES (/usr/bin/python3)
00:03:05.805 Program cat found: YES (/usr/bin/cat)
00:03:05.805 Compiler for C supports arguments -march=native: YES
00:03:05.805 Checking for size of "void *" : 8
00:03:05.805 Checking for size of "void *" : 8 (cached)
00:03:05.805 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:05.805 Library m found: YES
00:03:05.805 Library numa found: YES
00:03:05.805 Has header "numaif.h" : YES
00:03:05.805 Library fdt found: NO
00:03:05.805 Library execinfo found: NO
00:03:05.805 Has header "execinfo.h" : YES
00:03:05.805 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:05.805 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:05.805 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:05.805 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:05.805 Run-time dependency openssl found: YES 3.1.1
00:03:05.805 Run-time dependency libpcap found: YES 1.10.4
00:03:05.805 Has header "pcap.h" with dependency libpcap: YES
00:03:05.805 Compiler for C supports arguments -Wcast-qual: YES
00:03:05.805 Compiler for C supports arguments -Wdeprecated: YES
00:03:05.805 Compiler for C supports arguments -Wformat: YES
00:03:05.805 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:05.805 Compiler for C supports arguments -Wformat-security: NO
00:03:05.805 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:05.805 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:05.805 Compiler for C supports arguments -Wnested-externs: YES
00:03:05.805 Compiler for C supports arguments -Wold-style-definition: YES
00:03:05.805 Compiler for C supports arguments -Wpointer-arith: YES
00:03:05.805 Compiler for C supports arguments -Wsign-compare: YES
00:03:05.805 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:05.805 Compiler for C supports arguments -Wundef: YES
00:03:05.805 Compiler for C supports arguments -Wwrite-strings: YES
00:03:05.805 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:05.805 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:05.805 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:05.805 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:05.805 Program objdump found: YES (/usr/bin/objdump)
00:03:05.805 Compiler for C supports arguments -mavx512f: YES
00:03:05.805 Checking if "AVX512 checking" compiles: YES
00:03:05.805 Fetching value of define "__SSE4_2__" : 1
00:03:05.805 Fetching value of define "__AES__" : 1
00:03:05.805 Fetching value of define "__AVX__" : 1
00:03:05.805 Fetching value of define "__AVX2__" : 1
00:03:05.805 Fetching value of define "__AVX512BW__" : (undefined)
00:03:05.805 Fetching value of define "__AVX512CD__" : (undefined)
00:03:05.805 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:05.805 Fetching value of define "__AVX512F__" : (undefined)
00:03:05.805 Fetching value of define "__AVX512VL__" : (undefined)
00:03:05.805 Fetching value of define "__PCLMUL__" : 1
00:03:05.805 Fetching value of define "__RDRND__" : 1
00:03:05.805 Fetching value of define "__RDSEED__" : 1
00:03:05.805 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:05.805 Fetching value of define "__znver1__" : (undefined)
00:03:05.805 Fetching value of define "__znver2__" : (undefined)
00:03:05.805 Fetching value of define "__znver3__" : (undefined)
00:03:05.805 Fetching value of define "__znver4__" : (undefined)
00:03:05.805 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:05.805 Message: lib/log: Defining dependency "log"
00:03:05.805 Message: lib/kvargs: Defining dependency "kvargs"
00:03:05.805 Message: lib/telemetry: Defining dependency "telemetry"
00:03:05.805 Checking for function "getentropy" : NO
00:03:05.805 Message: lib/eal: Defining dependency "eal"
00:03:05.805 Message: lib/ring: Defining dependency "ring"
00:03:05.805 Message: lib/rcu: Defining dependency "rcu"
00:03:05.805 Message: lib/mempool: Defining dependency "mempool"
00:03:05.805 Message: lib/mbuf: Defining dependency "mbuf"
00:03:05.805 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:05.805 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:05.805 Compiler for C supports arguments -mpclmul: YES
00:03:05.805 Compiler for C supports arguments -maes: YES
00:03:05.805 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:05.805 Compiler for C supports arguments -mavx512bw: YES
00:03:05.805 Compiler for C supports arguments -mavx512dq: YES
00:03:05.805 Compiler for C supports arguments -mavx512vl: YES
00:03:05.805 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:05.805 Compiler for C supports arguments -mavx2: YES
00:03:05.805 Compiler for C supports arguments -mavx: YES
00:03:05.805 Message: lib/net: Defining dependency "net"
00:03:05.805 Message: lib/meter: Defining dependency "meter"
00:03:05.805 Message: lib/ethdev: Defining dependency "ethdev"
00:03:05.805 Message: lib/pci: Defining dependency "pci"
00:03:05.805 Message: lib/cmdline: Defining dependency "cmdline"
00:03:05.805 Message: lib/hash: Defining dependency "hash"
00:03:05.805 Message: lib/timer: Defining dependency "timer"
00:03:05.805 Message: lib/compressdev: Defining dependency "compressdev"
00:03:05.805 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:05.805 Message: lib/dmadev: Defining dependency "dmadev"
00:03:05.805 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:05.805 Message: lib/power: Defining dependency "power"
00:03:05.806 Message: lib/reorder: Defining dependency "reorder"
00:03:05.806 Message: lib/security: Defining dependency "security"
00:03:05.806 Has header "linux/userfaultfd.h" : YES
00:03:05.806 Has header "linux/vduse.h" : YES
00:03:05.806 Message: lib/vhost: Defining dependency "vhost"
00:03:05.806 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:05.806 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:05.806 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:05.806 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:05.806 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:05.806 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:05.806 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:05.806 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:05.806 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:05.806 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:05.806 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:05.806 Configuring doxy-api-html.conf using configuration
00:03:05.806 Configuring doxy-api-man.conf using configuration
00:03:05.806 Program mandb found: YES (/usr/bin/mandb)
00:03:05.806 Program sphinx-build found: NO
00:03:05.806 Configuring rte_build_config.h using configuration
00:03:05.806 Message:
00:03:05.806 =================
00:03:05.806 Applications Enabled
00:03:05.806 =================
00:03:05.806
00:03:05.806 apps:
00:03:05.806
00:03:05.806
00:03:05.806 Message:
00:03:05.806 =================
00:03:05.806 Libraries Enabled
00:03:05.806 =================
00:03:05.806
00:03:05.806 libs:
00:03:05.806 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:05.806 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:05.806 cryptodev, dmadev, power, reorder, security, vhost,
00:03:05.806
00:03:05.806 Message:
00:03:05.806 ===============
00:03:05.806 Drivers Enabled
00:03:05.806 ===============
00:03:05.806
00:03:05.806 common:
00:03:05.806
00:03:05.806 bus:
00:03:05.806 pci, vdev,
00:03:05.806 mempool:
00:03:05.806 ring,
00:03:05.806 dma:
00:03:05.806
00:03:05.806 net:
00:03:05.806
00:03:05.806 crypto:
00:03:05.806
00:03:05.806 compress:
00:03:05.806
00:03:05.806 vdpa:
00:03:05.806
00:03:05.806
00:03:05.806 Message:
00:03:05.806 =================
00:03:05.806 Content Skipped
00:03:05.806 =================
00:03:05.806
00:03:05.806 apps:
00:03:05.806 dumpcap: explicitly disabled via build config
00:03:05.806 graph: explicitly disabled via build config
00:03:05.806 pdump: explicitly disabled via build config
00:03:05.806 proc-info: explicitly disabled via build config
00:03:05.806 test-acl: explicitly disabled via build config
00:03:05.806 test-bbdev: explicitly disabled via build config
00:03:05.806 test-cmdline: explicitly disabled via build config
00:03:05.806 test-compress-perf: explicitly disabled via build config
00:03:05.806 test-crypto-perf: explicitly disabled via build config
00:03:05.806 test-dma-perf: explicitly disabled via build config
00:03:05.806 test-eventdev: explicitly disabled via build config
00:03:05.806 test-fib: explicitly disabled via build config
00:03:05.806 test-flow-perf: explicitly disabled via build config
00:03:05.806 test-gpudev: explicitly disabled via build config
00:03:05.806 test-mldev: explicitly disabled via build config
00:03:05.806 test-pipeline: explicitly disabled via build config
00:03:05.806 test-pmd: explicitly disabled via build config
00:03:05.806 test-regex: explicitly disabled via build config
00:03:05.806 test-sad: explicitly disabled via build config
00:03:05.806 test-security-perf: explicitly disabled via build config
00:03:05.806
00:03:05.806 libs:
00:03:05.806 argparse: explicitly disabled via build config
00:03:05.806 metrics: explicitly disabled via build config
00:03:05.806 acl: explicitly disabled via build config
00:03:05.806 bbdev: explicitly disabled via build config
00:03:05.806 bitratestats: explicitly disabled via build config
00:03:05.806 bpf: explicitly disabled via build config
00:03:05.806 cfgfile: explicitly disabled via build config
00:03:05.806 distributor: explicitly disabled via build config
00:03:05.806 efd: explicitly disabled via build config
00:03:05.806 eventdev: explicitly disabled via build config
00:03:05.806 dispatcher: explicitly disabled via build config
00:03:05.806 gpudev: explicitly disabled via build config
00:03:05.806 gro: explicitly disabled via build config
00:03:05.806 gso: explicitly disabled via build config
00:03:05.806 ip_frag: explicitly disabled via build config
00:03:05.806 jobstats: explicitly disabled via build config
00:03:05.806 latencystats: explicitly disabled via build config
00:03:05.806 lpm: explicitly disabled via build config
00:03:05.806 member: explicitly disabled via build config
00:03:05.806 pcapng: explicitly disabled via build config
00:03:05.806 rawdev: explicitly disabled via build config
00:03:05.806 regexdev: explicitly disabled via build config
00:03:05.806 mldev: explicitly disabled via build config
00:03:05.806 rib: explicitly disabled via build config
00:03:05.806 sched: explicitly disabled via build config
00:03:05.806 stack: explicitly disabled via build config
00:03:05.806 ipsec: explicitly disabled via build config
00:03:05.806 pdcp: explicitly disabled via build config
00:03:05.806 fib: explicitly disabled via build config
00:03:05.806 port: explicitly disabled via build config
00:03:05.806 pdump: explicitly disabled via build config
00:03:05.806 table: explicitly disabled via build config
00:03:05.806 pipeline: explicitly disabled via build config
00:03:05.806 graph: explicitly disabled via build config
00:03:05.806 node: explicitly disabled via build config
00:03:05.806
00:03:05.806 drivers:
00:03:05.806 common/cpt: not in enabled drivers build config
00:03:05.806 common/dpaax: not in enabled drivers build config
00:03:05.806 common/iavf: not in enabled drivers build config
00:03:05.806 common/idpf: not in enabled drivers build config
00:03:05.806 common/ionic: not in enabled drivers build config
00:03:05.806 common/mvep: not in enabled drivers build config
00:03:05.806 common/octeontx: not in enabled drivers build config
00:03:05.806 bus/auxiliary: not in enabled drivers build config
00:03:05.806 bus/cdx: not in enabled drivers build config
00:03:05.806 bus/dpaa: not in enabled drivers build config
00:03:05.806 bus/fslmc: not in enabled drivers build config
00:03:05.806 bus/ifpga: not in enabled drivers build config
00:03:05.806 bus/platform: not in enabled drivers build config
00:03:05.806 bus/uacce: not in enabled drivers build config
00:03:05.806 bus/vmbus: not in enabled drivers build config
00:03:05.806 common/cnxk: not in enabled drivers build config
00:03:05.806 common/mlx5: not in enabled drivers build config
00:03:05.806 common/nfp: not in enabled drivers build config
00:03:05.806 common/nitrox: not in enabled drivers build config
00:03:05.806 common/qat: not in enabled drivers build config
00:03:05.806 common/sfc_efx: not in enabled drivers build config
00:03:05.806 mempool/bucket: not in enabled drivers build config
00:03:05.806 mempool/cnxk: not in enabled drivers build config
00:03:05.806 mempool/dpaa: not in enabled drivers build config
00:03:05.806 mempool/dpaa2: not in enabled drivers build config
00:03:05.806 mempool/octeontx: not in enabled drivers build config
00:03:05.806 mempool/stack: not in enabled drivers build config
00:03:05.806 dma/cnxk: not in enabled drivers build config
00:03:05.806 dma/dpaa: not in enabled drivers build config
00:03:05.806 dma/dpaa2: not in enabled drivers build config
00:03:05.806 dma/hisilicon: not in enabled drivers build config
00:03:05.806 dma/idxd: not in enabled drivers build config
00:03:05.806 dma/ioat: not in enabled drivers build config
00:03:05.806 dma/skeleton: not in enabled drivers build config
00:03:05.806 net/af_packet: not in enabled drivers build config
00:03:05.806 net/af_xdp: not in enabled drivers build config
00:03:05.806 net/ark: not in enabled drivers build config
00:03:05.806 net/atlantic: not in enabled drivers build config
00:03:05.806 net/avp: not in enabled drivers build config
00:03:05.806 net/axgbe: not in enabled drivers build config
00:03:05.806 net/bnx2x: not in enabled drivers build config
00:03:05.806 net/bnxt: not in enabled drivers build config
00:03:05.806 net/bonding: not in enabled drivers build config
00:03:05.806 net/cnxk: not in enabled drivers build config
00:03:05.806 net/cpfl: not in enabled drivers build config
00:03:05.806 net/cxgbe: not in enabled drivers build config
00:03:05.806 net/dpaa: not in enabled drivers build config
00:03:05.806 net/dpaa2: not in enabled drivers build config
00:03:05.806 net/e1000: not in enabled drivers build config
00:03:05.806 net/ena: not in enabled drivers build config
00:03:05.806 net/enetc: not in enabled drivers build config
00:03:05.806 net/enetfec: not in enabled drivers build config
00:03:05.806 net/enic: not in enabled drivers build config
00:03:05.806 net/failsafe: not in enabled drivers build config
00:03:05.806 net/fm10k: not in enabled drivers build config
00:03:05.806 net/gve: not in enabled drivers build config
00:03:05.806 net/hinic: not in enabled drivers build config
00:03:05.806 net/hns3: not in enabled drivers build config
00:03:05.806 net/i40e: not in enabled drivers build config
00:03:05.806 net/iavf: not in enabled drivers build config
00:03:05.806 net/ice: not in enabled drivers build config
00:03:05.806 net/idpf: not in enabled drivers build config
00:03:05.806 net/igc: not in enabled drivers build config
00:03:05.806 net/ionic: not in enabled drivers build config
00:03:05.806 net/ipn3ke: not in enabled drivers build config
00:03:05.806 net/ixgbe: not in enabled drivers build config
00:03:05.806 net/mana: not in enabled drivers build config
00:03:05.806 net/memif: not in enabled drivers build config
00:03:05.806 net/mlx4: not in enabled drivers build config
00:03:05.806 net/mlx5: not in enabled drivers build config
00:03:05.806 net/mvneta: not in enabled drivers build config
00:03:05.806 net/mvpp2: not in enabled drivers build config
00:03:05.806 net/netvsc: not in enabled drivers build config
00:03:05.806 net/nfb: not in enabled drivers build config
00:03:05.806 net/nfp: not in enabled drivers build config
00:03:05.806 net/ngbe: not in enabled drivers build config
00:03:05.806 net/null: not in enabled drivers build config
00:03:05.806 net/octeontx: not in enabled drivers build config
00:03:05.806 net/octeon_ep: not in enabled drivers build config
00:03:05.806 net/pcap: not in enabled drivers build config
00:03:05.806 net/pfe: not in enabled drivers build config
00:03:05.806 net/qede: not in enabled drivers build config
00:03:05.806 net/ring: not in enabled drivers build config
00:03:05.806 net/sfc: not in enabled drivers build config
00:03:05.806 net/softnic: not in enabled drivers build config
00:03:05.806 net/tap: not in enabled drivers build config
00:03:05.807 net/thunderx: not in enabled drivers build config
00:03:05.807 net/txgbe: not in enabled drivers build config
00:03:05.807 net/vdev_netvsc: not in enabled drivers build config
00:03:05.807 net/vhost: not in enabled drivers build config
00:03:05.807 net/virtio: not in enabled drivers build config
00:03:05.807 net/vmxnet3: not in enabled drivers build config
00:03:05.807 raw/*: missing internal dependency, "rawdev"
00:03:05.807 crypto/armv8: not in enabled drivers build config
00:03:05.807 crypto/bcmfs: not in enabled drivers build config
00:03:05.807 crypto/caam_jr: not in enabled drivers build config
00:03:05.807 crypto/ccp: not in enabled drivers build config
00:03:05.807 crypto/cnxk: not in enabled drivers build config
00:03:05.807 crypto/dpaa_sec: not in enabled drivers build config
00:03:05.807 crypto/dpaa2_sec: not in enabled drivers build config
00:03:05.807 crypto/ipsec_mb: not in enabled drivers build config
00:03:05.807 crypto/mlx5: not in enabled drivers build config
00:03:05.807 crypto/mvsam: not in enabled drivers build config
00:03:05.807 crypto/nitrox: not in enabled drivers build config
00:03:05.807 crypto/null: not in enabled drivers build config
00:03:05.807 crypto/octeontx: not in enabled drivers build config
00:03:05.807 crypto/openssl: not in enabled drivers build config
00:03:05.807 crypto/scheduler: not in enabled drivers build config
00:03:05.807 crypto/uadk: not in enabled drivers build config
00:03:05.807 crypto/virtio: not in enabled drivers build config
00:03:05.807 compress/isal: not in enabled drivers build config
00:03:05.807 compress/mlx5: not in enabled drivers build config
00:03:05.807 compress/nitrox: not in enabled drivers build config
00:03:05.807 compress/octeontx: not in enabled drivers build config
00:03:05.807 compress/zlib: not in enabled drivers build config
00:03:05.807 regex/*: missing internal dependency, "regexdev"
00:03:05.807 ml/*: missing internal dependency, "mldev"
00:03:05.807 vdpa/ifc: not in enabled drivers build config
00:03:05.807 vdpa/mlx5: not in enabled drivers build config
00:03:05.807 vdpa/nfp: not in enabled drivers build config
00:03:05.807 vdpa/sfc: not in enabled drivers build config
00:03:05.807 event/*: missing internal dependency, "eventdev"
00:03:05.807 baseband/*: missing internal dependency, "bbdev"
00:03:05.807 gpu/*: missing internal dependency, "gpudev"
00:03:05.807
00:03:05.807
00:03:05.807 Build targets in project: 85
00:03:05.807
00:03:05.807 DPDK 24.03.0
00:03:05.807
00:03:05.807 User defined options
00:03:05.807 buildtype : debug
00:03:05.807 default_library : shared
00:03:05.807 libdir : lib
00:03:05.807 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:05.807 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:05.807 c_link_args :
00:03:05.807 cpu_instruction_set: native
00:03:05.807 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:05.807 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:05.807 enable_docs : false
00:03:05.807 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:05.807 enable_kmods : false
00:03:05.807 max_lcores : 128
00:03:05.807 tests : false
00:03:05.807
00:03:05.807 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:05.807 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:05.807 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:05.807 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:05.807 [3/268] Linking static target lib/librte_kvargs.a
00:03:05.807 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:05.807 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:05.807 [6/268] Linking static target lib/librte_log.a
00:03:05.807 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.807 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:05.807 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:05.807 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:05.807 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:05.807 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:05.807 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:06.065 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:06.065 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:06.065 [16/268] Linking static target lib/librte_telemetry.a
00:03:06.323 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:06.323 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.323 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:06.323 [20/268] Linking target lib/librte_log.so.24.1
00:03:06.581 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:06.581 [22/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:06.581 [23/268] Linking target lib/librte_kvargs.so.24.1
00:03:06.839 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:06.839 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:06.839 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:06.839 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:06.839 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:06.839 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:06.839 [30/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:07.097 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:07.097 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:07.097 [33/268] Linking target lib/librte_telemetry.so.24.1
00:03:07.097 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:07.355 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:07.355 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:07.355 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:07.355 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:07.614 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:07.614 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:07.872 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:07.872 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:07.872 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:07.872 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:07.872 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:07.872 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:08.130 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:08.130 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:08.387 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:08.387 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:08.645 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:08.645 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:08.902 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:08.902 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:08.902 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:08.902 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:09.160 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:09.160 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:09.160 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:09.160 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:09.418 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:09.418 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:09.675 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:09.675 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:09.675 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:09.933 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:09.933 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:09.933 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:09.933 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:10.191 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:10.191 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:10.191 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:10.191 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:10.191 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:10.449 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:10.449 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:10.449 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:10.706 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:10.706 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:10.706 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:10.706 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:10.963 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:10.963 [83/268] Linking static target lib/librte_ring.a
00:03:11.220 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:11.220 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:11.220 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:11.220 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:11.220 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:11.220 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:11.220 [90/268] Linking static target lib/librte_eal.a
00:03:11.477 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:11.477 [92/268] Linking static target lib/librte_mempool.a
00:03:11.477 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:11.477 [94/268] Linking static target lib/librte_rcu.a
00:03:11.477 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.735 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:11.735 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:11.735 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:11.735 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:11.993 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:11.993 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.993 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:11.993 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:11.993 [104/268] Linking static target lib/librte_mbuf.a
00:03:12.250 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:12.250 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:12.250 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:12.508 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:12.508 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:12.508 [110/268] Linking static target lib/librte_net.a
00:03:12.508 [111/268] Linking static target lib/librte_meter.a
00:03:12.767 [112/268] Generating
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.767 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:13.024 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:13.024 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.025 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.025 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:13.025 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:13.282 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.848 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:13.848 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:13.848 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:13.848 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:14.107 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:14.107 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:14.107 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:14.107 [127/268] Linking static target lib/librte_pci.a 00:03:14.107 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:14.365 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:14.365 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:14.365 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:14.365 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:14.365 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:14.622 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.622 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:14.622 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:14.622 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:14.622 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:14.622 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:14.622 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:14.622 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:14.622 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:14.622 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:14.880 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:14.880 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:14.880 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:14.880 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:14.880 [148/268] Linking static target lib/librte_cmdline.a 00:03:15.447 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:15.447 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:15.447 [151/268] Linking static 
target lib/librte_ethdev.a 00:03:15.447 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:15.705 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:15.705 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:15.705 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:15.705 [156/268] Linking static target lib/librte_timer.a 00:03:15.961 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:15.961 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:15.961 [159/268] Linking static target lib/librte_hash.a 00:03:15.961 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:15.961 [161/268] Linking static target lib/librte_compressdev.a 00:03:16.219 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:16.219 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:16.487 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.487 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:16.754 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:16.754 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:16.754 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:16.754 [169/268] Linking static target lib/librte_dmadev.a 00:03:16.754 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.754 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:17.011 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:17.011 [173/268] Linking static target lib/librte_cryptodev.a 00:03:17.011 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:17.011 [175/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:17.011 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.011 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.269 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:17.527 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:17.527 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:17.527 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.785 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:17.785 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:17.785 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:17.785 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:17.785 [186/268] Linking static target lib/librte_power.a 00:03:18.043 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:18.043 [188/268] Linking static target lib/librte_reorder.a 00:03:18.300 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:18.300 [190/268] Linking static target lib/librte_security.a 00:03:18.300 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:03:18.558 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:18.558 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:18.815 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.815 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:19.073 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.073 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.331 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:19.331 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:19.331 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:19.589 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.589 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:20.153 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:20.153 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:20.153 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:20.153 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:20.153 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:20.411 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:20.411 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:20.411 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:20.411 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:20.411 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:20.669 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:20.669 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.669 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.669 [216/268] Linking static target drivers/librte_bus_pci.a 00:03:20.669 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:20.669 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.669 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.669 [220/268] Linking static target drivers/librte_bus_vdev.a 00:03:20.669 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:20.669 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:20.927 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.927 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:20.927 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.927 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.927 [227/268] Linking static target drivers/librte_mempool_ring.a 00:03:21.185 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:03:21.763 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:21.763 [230/268] Linking static target lib/librte_vhost.a 00:03:22.712 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.712 [232/268] Linking target lib/librte_eal.so.24.1 00:03:22.712 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:22.712 [234/268] Linking target lib/librte_timer.so.24.1 00:03:22.712 [235/268] Linking target lib/librte_ring.so.24.1 00:03:22.712 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:22.712 [237/268] Linking target lib/librte_pci.so.24.1 00:03:22.712 [238/268] Linking target lib/librte_meter.so.24.1 00:03:22.712 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:22.970 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:22.970 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:22.970 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:22.970 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:22.970 [244/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.970 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:22.970 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:22.970 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:22.970 [248/268] Linking target lib/librte_mempool.so.24.1 00:03:23.228 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:23.228 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:23.228 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:23.228 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:23.228 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.228 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:23.486 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:23.486 [256/268] Linking target lib/librte_compressdev.so.24.1 00:03:23.486 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:23.486 [258/268] Linking target lib/librte_net.so.24.1 00:03:23.486 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:23.486 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:23.486 [261/268] Linking target lib/librte_hash.so.24.1 00:03:23.486 [262/268] Linking target lib/librte_security.so.24.1 00:03:23.486 [263/268] Linking target lib/librte_cmdline.so.24.1 00:03:23.486 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:23.744 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:23.744 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:23.744 [267/268] Linking target lib/librte_power.so.24.1 00:03:23.744 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:23.744 INFO: autodetecting backend as ninja 00:03:23.744 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:50.338 CC lib/ut_mock/mock.o 00:03:50.338 CC lib/ut/ut.o 00:03:50.338 CC lib/log/log.o 00:03:50.338 CC 
lib/log/log_flags.o 00:03:50.338 CC lib/log/log_deprecated.o 00:03:50.338 LIB libspdk_ut.a 00:03:50.338 SO libspdk_ut.so.2.0 00:03:50.338 LIB libspdk_log.a 00:03:50.338 LIB libspdk_ut_mock.a 00:03:50.338 SO libspdk_log.so.7.1 00:03:50.338 SYMLINK libspdk_ut.so 00:03:50.338 SO libspdk_ut_mock.so.6.0 00:03:50.338 SYMLINK libspdk_log.so 00:03:50.338 SYMLINK libspdk_ut_mock.so 00:03:50.338 CC lib/ioat/ioat.o 00:03:50.338 CC lib/util/base64.o 00:03:50.338 CC lib/util/cpuset.o 00:03:50.338 CC lib/util/crc16.o 00:03:50.338 CC lib/util/bit_array.o 00:03:50.338 CC lib/util/crc32c.o 00:03:50.338 CC lib/util/crc32.o 00:03:50.338 CXX lib/trace_parser/trace.o 00:03:50.338 CC lib/dma/dma.o 00:03:50.338 CC lib/vfio_user/host/vfio_user_pci.o 00:03:50.338 CC lib/util/crc32_ieee.o 00:03:50.338 CC lib/util/crc64.o 00:03:50.338 CC lib/util/dif.o 00:03:50.338 CC lib/util/fd.o 00:03:50.338 CC lib/util/fd_group.o 00:03:50.338 CC lib/util/file.o 00:03:50.338 LIB libspdk_dma.a 00:03:50.338 CC lib/vfio_user/host/vfio_user.o 00:03:50.338 SO libspdk_dma.so.5.0 00:03:50.338 CC lib/util/hexlify.o 00:03:50.338 LIB libspdk_ioat.a 00:03:50.338 SYMLINK libspdk_dma.so 00:03:50.338 CC lib/util/iov.o 00:03:50.338 SO libspdk_ioat.so.7.0 00:03:50.338 CC lib/util/math.o 00:03:50.338 CC lib/util/net.o 00:03:50.338 SYMLINK libspdk_ioat.so 00:03:50.338 CC lib/util/pipe.o 00:03:50.338 CC lib/util/strerror_tls.o 00:03:50.338 CC lib/util/string.o 00:03:50.338 LIB libspdk_vfio_user.a 00:03:50.338 CC lib/util/uuid.o 00:03:50.338 SO libspdk_vfio_user.so.5.0 00:03:50.338 CC lib/util/xor.o 00:03:50.338 CC lib/util/zipf.o 00:03:50.338 SYMLINK libspdk_vfio_user.so 00:03:50.338 CC lib/util/md5.o 00:03:50.597 LIB libspdk_util.a 00:03:50.856 SO libspdk_util.so.10.1 00:03:50.856 LIB libspdk_trace_parser.a 00:03:50.856 SYMLINK libspdk_util.so 00:03:50.856 SO libspdk_trace_parser.so.6.0 00:03:51.114 SYMLINK libspdk_trace_parser.so 00:03:51.114 CC lib/idxd/idxd.o 00:03:51.114 CC lib/env_dpdk/env.o 00:03:51.114 CC lib/conf/conf.o 00:03:51.114 CC lib/idxd/idxd_kernel.o 00:03:51.114 CC lib/env_dpdk/memory.o 00:03:51.114 CC lib/rdma_utils/rdma_utils.o 00:03:51.114 CC lib/env_dpdk/pci.o 00:03:51.114 CC lib/idxd/idxd_user.o 00:03:51.114 CC lib/vmd/vmd.o 00:03:51.114 CC lib/json/json_parse.o 00:03:51.372 CC lib/vmd/led.o 00:03:51.372 LIB libspdk_conf.a 00:03:51.372 CC lib/json/json_util.o 00:03:51.372 SO libspdk_conf.so.6.0 00:03:51.372 CC lib/json/json_write.o 00:03:51.372 LIB libspdk_rdma_utils.a 00:03:51.372 SYMLINK libspdk_conf.so 00:03:51.372 SO libspdk_rdma_utils.so.1.0 00:03:51.372 CC lib/env_dpdk/init.o 00:03:51.372 CC lib/env_dpdk/threads.o 00:03:51.372 SYMLINK libspdk_rdma_utils.so 00:03:51.372 CC lib/env_dpdk/pci_ioat.o 00:03:51.372 CC lib/env_dpdk/pci_virtio.o 00:03:51.630 CC lib/env_dpdk/pci_vmd.o 00:03:51.630 CC lib/env_dpdk/pci_idxd.o 00:03:51.630 CC lib/env_dpdk/pci_event.o 00:03:51.630 LIB libspdk_idxd.a 00:03:51.630 SO libspdk_idxd.so.12.1 00:03:51.630 LIB libspdk_json.a 00:03:51.630 LIB libspdk_vmd.a 00:03:51.630 CC lib/env_dpdk/sigbus_handler.o 00:03:51.630 CC lib/env_dpdk/pci_dpdk.o 00:03:51.888 SO libspdk_json.so.6.0 00:03:51.888 SO libspdk_vmd.so.6.0 00:03:51.888 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:51.888 CC lib/rdma_provider/common.o 00:03:51.888 SYMLINK libspdk_idxd.so 00:03:51.888 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:51.888 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:51.888 SYMLINK libspdk_vmd.so 00:03:51.888 SYMLINK libspdk_json.so 00:03:52.147 LIB libspdk_rdma_provider.a 00:03:52.147 CC lib/jsonrpc/jsonrpc_server.o 
00:03:52.147 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:52.147 CC lib/jsonrpc/jsonrpc_client.o 00:03:52.147 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:52.147 SO libspdk_rdma_provider.so.7.0 00:03:52.147 SYMLINK libspdk_rdma_provider.so 00:03:52.405 LIB libspdk_jsonrpc.a 00:03:52.405 SO libspdk_jsonrpc.so.6.0 00:03:52.405 SYMLINK libspdk_jsonrpc.so 00:03:52.405 LIB libspdk_env_dpdk.a 00:03:52.663 SO libspdk_env_dpdk.so.15.1 00:03:52.663 CC lib/rpc/rpc.o 00:03:52.663 SYMLINK libspdk_env_dpdk.so 00:03:52.921 LIB libspdk_rpc.a 00:03:52.921 SO libspdk_rpc.so.6.0 00:03:53.181 SYMLINK libspdk_rpc.so 00:03:53.438 CC lib/keyring/keyring.o 00:03:53.438 CC lib/keyring/keyring_rpc.o 00:03:53.438 CC lib/notify/notify.o 00:03:53.438 CC lib/notify/notify_rpc.o 00:03:53.438 CC lib/trace/trace.o 00:03:53.438 CC lib/trace/trace_flags.o 00:03:53.438 CC lib/trace/trace_rpc.o 00:03:53.438 LIB libspdk_notify.a 00:03:53.697 SO libspdk_notify.so.6.0 00:03:53.697 SYMLINK libspdk_notify.so 00:03:53.697 LIB libspdk_keyring.a 00:03:53.697 LIB libspdk_trace.a 00:03:53.697 SO libspdk_keyring.so.2.0 00:03:53.697 SO libspdk_trace.so.11.0 00:03:53.697 SYMLINK libspdk_keyring.so 00:03:53.697 SYMLINK libspdk_trace.so 00:03:53.954 CC lib/thread/thread.o 00:03:53.954 CC lib/thread/iobuf.o 00:03:53.954 CC lib/sock/sock.o 00:03:53.954 CC lib/sock/sock_rpc.o 00:03:54.523 LIB libspdk_sock.a 00:03:54.523 SO libspdk_sock.so.10.0 00:03:54.781 SYMLINK libspdk_sock.so 00:03:55.039 CC lib/nvme/nvme_ctrlr.o 00:03:55.039 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:55.039 CC lib/nvme/nvme_fabric.o 00:03:55.039 CC lib/nvme/nvme_ns.o 00:03:55.039 CC lib/nvme/nvme_ns_cmd.o 00:03:55.039 CC lib/nvme/nvme_pcie_common.o 00:03:55.039 CC lib/nvme/nvme_pcie.o 00:03:55.039 CC lib/nvme/nvme_qpair.o 00:03:55.039 CC lib/nvme/nvme.o 00:03:55.972 LIB libspdk_thread.a 00:03:55.972 CC lib/nvme/nvme_quirks.o 00:03:55.972 SO libspdk_thread.so.11.0 00:03:55.972 CC lib/nvme/nvme_transport.o 00:03:55.972 SYMLINK libspdk_thread.so 00:03:55.972 CC lib/nvme/nvme_discovery.o 00:03:55.972 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:55.972 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:55.972 CC lib/nvme/nvme_tcp.o 00:03:55.972 CC lib/nvme/nvme_opal.o 00:03:55.972 CC lib/nvme/nvme_io_msg.o 00:03:56.229 CC lib/nvme/nvme_poll_group.o 00:03:56.487 CC lib/nvme/nvme_zns.o 00:03:56.487 CC lib/nvme/nvme_stubs.o 00:03:56.487 CC lib/nvme/nvme_auth.o 00:03:56.487 CC lib/nvme/nvme_cuse.o 00:03:56.487 CC lib/nvme/nvme_rdma.o 00:03:57.051 CC lib/accel/accel.o 00:03:57.051 CC lib/blob/blobstore.o 00:03:57.051 CC lib/accel/accel_rpc.o 00:03:57.051 CC lib/accel/accel_sw.o 00:03:57.309 CC lib/init/json_config.o 00:03:57.309 CC lib/virtio/virtio.o 00:03:57.309 CC lib/init/subsystem.o 00:03:57.567 CC lib/fsdev/fsdev.o 00:03:57.567 CC lib/virtio/virtio_vhost_user.o 00:03:57.567 CC lib/virtio/virtio_vfio_user.o 00:03:57.567 CC lib/virtio/virtio_pci.o 00:03:57.567 CC lib/blob/request.o 00:03:57.567 CC lib/init/subsystem_rpc.o 00:03:57.567 CC lib/fsdev/fsdev_io.o 00:03:57.825 CC lib/init/rpc.o 00:03:57.825 CC lib/blob/zeroes.o 00:03:57.825 CC lib/fsdev/fsdev_rpc.o 00:03:57.825 LIB libspdk_virtio.a 00:03:57.825 CC lib/blob/blob_bs_dev.o 00:03:57.825 SO libspdk_virtio.so.7.0 00:03:57.825 LIB libspdk_init.a 00:03:57.825 SO libspdk_init.so.6.0 00:03:57.825 SYMLINK libspdk_virtio.so 00:03:58.083 SYMLINK libspdk_init.so 00:03:58.083 LIB libspdk_nvme.a 00:03:58.083 LIB libspdk_fsdev.a 00:03:58.083 SO libspdk_fsdev.so.2.0 00:03:58.083 LIB libspdk_accel.a 00:03:58.384 SYMLINK libspdk_fsdev.so 00:03:58.384 SO 
libspdk_accel.so.16.0 00:03:58.384 SO libspdk_nvme.so.15.0 00:03:58.384 CC lib/event/log_rpc.o 00:03:58.384 CC lib/event/reactor.o 00:03:58.384 CC lib/event/app.o 00:03:58.384 CC lib/event/app_rpc.o 00:03:58.384 CC lib/event/scheduler_static.o 00:03:58.384 SYMLINK libspdk_accel.so 00:03:58.384 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:58.644 CC lib/bdev/bdev.o 00:03:58.644 CC lib/bdev/bdev_rpc.o 00:03:58.644 CC lib/bdev/bdev_zone.o 00:03:58.644 CC lib/bdev/part.o 00:03:58.644 CC lib/bdev/scsi_nvme.o 00:03:58.644 SYMLINK libspdk_nvme.so 00:03:58.644 LIB libspdk_event.a 00:03:58.644 SO libspdk_event.so.14.0 00:03:58.903 SYMLINK libspdk_event.so 00:03:59.161 LIB libspdk_fuse_dispatcher.a 00:03:59.161 SO libspdk_fuse_dispatcher.so.1.0 00:03:59.161 SYMLINK libspdk_fuse_dispatcher.so 00:04:00.096 LIB libspdk_blob.a 00:04:00.096 SO libspdk_blob.so.12.0 00:04:00.354 SYMLINK libspdk_blob.so 00:04:00.612 CC lib/blobfs/blobfs.o 00:04:00.612 CC lib/blobfs/tree.o 00:04:00.612 CC lib/lvol/lvol.o 00:04:01.547 LIB libspdk_bdev.a 00:04:01.547 LIB libspdk_blobfs.a 00:04:01.547 SO libspdk_bdev.so.17.0 00:04:01.547 SO libspdk_blobfs.so.11.0 00:04:01.547 SYMLINK libspdk_blobfs.so 00:04:01.547 SYMLINK libspdk_bdev.so 00:04:01.547 LIB libspdk_lvol.a 00:04:01.547 SO libspdk_lvol.so.11.0 00:04:01.547 SYMLINK libspdk_lvol.so 00:04:01.806 CC lib/nbd/nbd.o 00:04:01.806 CC lib/nbd/nbd_rpc.o 00:04:01.806 CC lib/nvmf/ctrlr.o 00:04:01.806 CC lib/nvmf/ctrlr_discovery.o 00:04:01.806 CC lib/nvmf/subsystem.o 00:04:01.806 CC lib/nvmf/ctrlr_bdev.o 00:04:01.806 CC lib/nvmf/nvmf.o 00:04:01.806 CC lib/ublk/ublk.o 00:04:01.806 CC lib/ftl/ftl_core.o 00:04:01.806 CC lib/scsi/dev.o 00:04:02.064 CC lib/scsi/lun.o 00:04:02.064 CC lib/ublk/ublk_rpc.o 00:04:02.064 CC lib/ftl/ftl_init.o 00:04:02.064 LIB libspdk_nbd.a 00:04:02.064 SO libspdk_nbd.so.7.0 00:04:02.322 CC lib/nvmf/nvmf_rpc.o 00:04:02.322 SYMLINK libspdk_nbd.so 00:04:02.322 CC lib/nvmf/transport.o 00:04:02.322 CC lib/scsi/port.o 00:04:02.322 CC lib/nvmf/tcp.o 00:04:02.322 CC lib/ftl/ftl_layout.o 00:04:02.322 LIB libspdk_ublk.a 00:04:02.322 SO libspdk_ublk.so.3.0 00:04:02.580 CC lib/scsi/scsi.o 00:04:02.581 SYMLINK libspdk_ublk.so 00:04:02.581 CC lib/scsi/scsi_bdev.o 00:04:02.581 CC lib/ftl/ftl_debug.o 00:04:02.581 CC lib/ftl/ftl_io.o 00:04:02.581 CC lib/nvmf/stubs.o 00:04:02.839 CC lib/ftl/ftl_sb.o 00:04:02.839 CC lib/ftl/ftl_l2p.o 00:04:02.839 CC lib/ftl/ftl_l2p_flat.o 00:04:02.839 CC lib/ftl/ftl_nv_cache.o 00:04:02.839 CC lib/scsi/scsi_pr.o 00:04:03.098 CC lib/ftl/ftl_band.o 00:04:03.098 CC lib/nvmf/mdns_server.o 00:04:03.098 CC lib/nvmf/rdma.o 00:04:03.098 CC lib/nvmf/auth.o 00:04:03.098 CC lib/ftl/ftl_band_ops.o 00:04:03.098 CC lib/ftl/ftl_writer.o 00:04:03.357 CC lib/scsi/scsi_rpc.o 00:04:03.357 CC lib/scsi/task.o 00:04:03.357 CC lib/ftl/ftl_rq.o 00:04:03.357 CC lib/ftl/ftl_reloc.o 00:04:03.615 CC lib/ftl/ftl_l2p_cache.o 00:04:03.615 CC lib/ftl/ftl_p2l.o 00:04:03.615 CC lib/ftl/ftl_p2l_log.o 00:04:03.615 LIB libspdk_scsi.a 00:04:03.615 CC lib/ftl/mngt/ftl_mngt.o 00:04:03.615 SO libspdk_scsi.so.9.0 00:04:03.930 SYMLINK libspdk_scsi.so 00:04:03.930 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:03.930 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:03.930 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:03.930 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:03.930 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:04.189 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:04.189 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:04.189 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:04.189 CC lib/iscsi/conn.o 00:04:04.189 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:04:04.189 CC lib/iscsi/init_grp.o 00:04:04.189 CC lib/vhost/vhost.o 00:04:04.189 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:04.189 CC lib/iscsi/iscsi.o 00:04:04.189 CC lib/vhost/vhost_rpc.o 00:04:04.189 CC lib/vhost/vhost_scsi.o 00:04:04.449 CC lib/vhost/vhost_blk.o 00:04:04.449 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:04.449 CC lib/iscsi/param.o 00:04:04.449 CC lib/iscsi/portal_grp.o 00:04:04.708 CC lib/iscsi/tgt_node.o 00:04:04.708 CC lib/vhost/rte_vhost_user.o 00:04:04.708 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:04.967 CC lib/iscsi/iscsi_subsystem.o 00:04:04.967 CC lib/iscsi/iscsi_rpc.o 00:04:04.967 CC lib/ftl/utils/ftl_conf.o 00:04:04.967 LIB libspdk_nvmf.a 00:04:05.226 CC lib/iscsi/task.o 00:04:05.226 CC lib/ftl/utils/ftl_md.o 00:04:05.226 SO libspdk_nvmf.so.20.0 00:04:05.226 CC lib/ftl/utils/ftl_mempool.o 00:04:05.226 CC lib/ftl/utils/ftl_bitmap.o 00:04:05.226 CC lib/ftl/utils/ftl_property.o 00:04:05.485 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:05.485 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:05.485 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:05.485 SYMLINK libspdk_nvmf.so 00:04:05.485 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:05.485 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:05.485 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:05.485 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:05.485 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:05.485 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:05.485 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:05.743 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:05.743 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:05.743 LIB libspdk_iscsi.a 00:04:05.743 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:05.743 CC lib/ftl/base/ftl_base_dev.o 00:04:05.743 SO libspdk_iscsi.so.8.0 00:04:05.743 CC lib/ftl/base/ftl_base_bdev.o 00:04:05.743 CC lib/ftl/ftl_trace.o 00:04:05.743 LIB libspdk_vhost.a 00:04:06.002 SYMLINK libspdk_iscsi.so 00:04:06.002 SO libspdk_vhost.so.8.0 00:04:06.002 SYMLINK libspdk_vhost.so 00:04:06.002 LIB libspdk_ftl.a 00:04:06.261 SO libspdk_ftl.so.9.0 00:04:06.519 SYMLINK libspdk_ftl.so 00:04:07.085 CC module/env_dpdk/env_dpdk_rpc.o 00:04:07.085 CC module/keyring/file/keyring.o 00:04:07.085 CC module/scheduler/gscheduler/gscheduler.o 00:04:07.085 CC module/accel/ioat/accel_ioat.o 00:04:07.085 CC module/sock/posix/posix.o 00:04:07.085 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:07.085 CC module/blob/bdev/blob_bdev.o 00:04:07.085 CC module/accel/error/accel_error.o 00:04:07.085 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:07.085 CC module/fsdev/aio/fsdev_aio.o 00:04:07.085 LIB libspdk_env_dpdk_rpc.a 00:04:07.085 SO libspdk_env_dpdk_rpc.so.6.0 00:04:07.085 SYMLINK libspdk_env_dpdk_rpc.so 00:04:07.344 LIB libspdk_scheduler_gscheduler.a 00:04:07.344 CC module/keyring/file/keyring_rpc.o 00:04:07.344 SO libspdk_scheduler_gscheduler.so.4.0 00:04:07.344 LIB libspdk_scheduler_dpdk_governor.a 00:04:07.344 CC module/accel/ioat/accel_ioat_rpc.o 00:04:07.344 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:07.344 CC module/accel/error/accel_error_rpc.o 00:04:07.344 LIB libspdk_scheduler_dynamic.a 00:04:07.344 SYMLINK libspdk_scheduler_gscheduler.so 00:04:07.344 SO libspdk_scheduler_dynamic.so.4.0 00:04:07.344 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:07.344 LIB libspdk_blob_bdev.a 00:04:07.344 LIB libspdk_keyring_file.a 00:04:07.344 SO libspdk_blob_bdev.so.12.0 00:04:07.344 SYMLINK libspdk_scheduler_dynamic.so 00:04:07.344 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:07.344 SO libspdk_keyring_file.so.2.0 00:04:07.344 LIB 
libspdk_accel_ioat.a 00:04:07.344 CC module/keyring/linux/keyring.o 00:04:07.344 LIB libspdk_accel_error.a 00:04:07.344 SO libspdk_accel_ioat.so.6.0 00:04:07.344 SYMLINK libspdk_blob_bdev.so 00:04:07.344 SYMLINK libspdk_keyring_file.so 00:04:07.344 CC module/keyring/linux/keyring_rpc.o 00:04:07.603 SO libspdk_accel_error.so.2.0 00:04:07.603 CC module/fsdev/aio/linux_aio_mgr.o 00:04:07.603 SYMLINK libspdk_accel_ioat.so 00:04:07.603 CC module/accel/dsa/accel_dsa.o 00:04:07.603 CC module/accel/iaa/accel_iaa.o 00:04:07.603 SYMLINK libspdk_accel_error.so 00:04:07.603 CC module/accel/iaa/accel_iaa_rpc.o 00:04:07.603 CC module/accel/dsa/accel_dsa_rpc.o 00:04:07.603 LIB libspdk_keyring_linux.a 00:04:07.603 SO libspdk_keyring_linux.so.1.0 00:04:07.603 SYMLINK libspdk_keyring_linux.so 00:04:07.862 LIB libspdk_fsdev_aio.a 00:04:07.862 LIB libspdk_accel_iaa.a 00:04:07.862 SO libspdk_fsdev_aio.so.1.0 00:04:07.862 LIB libspdk_sock_posix.a 00:04:07.862 CC module/bdev/delay/vbdev_delay.o 00:04:07.862 SO libspdk_accel_iaa.so.3.0 00:04:07.862 LIB libspdk_accel_dsa.a 00:04:07.862 SO libspdk_sock_posix.so.6.0 00:04:07.862 CC module/blobfs/bdev/blobfs_bdev.o 00:04:07.862 SO libspdk_accel_dsa.so.5.0 00:04:07.862 SYMLINK libspdk_accel_iaa.so 00:04:07.862 SYMLINK libspdk_fsdev_aio.so 00:04:07.862 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:07.862 CC module/bdev/error/vbdev_error.o 00:04:07.862 CC module/bdev/gpt/gpt.o 00:04:07.862 CC module/bdev/lvol/vbdev_lvol.o 00:04:07.862 SYMLINK libspdk_accel_dsa.so 00:04:07.862 CC module/bdev/malloc/bdev_malloc.o 00:04:07.862 SYMLINK libspdk_sock_posix.so 00:04:07.862 CC module/bdev/gpt/vbdev_gpt.o 00:04:07.862 CC module/bdev/error/vbdev_error_rpc.o 00:04:08.121 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:08.121 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:08.121 CC module/bdev/null/bdev_null.o 00:04:08.121 LIB libspdk_bdev_error.a 00:04:08.121 LIB libspdk_bdev_delay.a 00:04:08.121 LIB libspdk_bdev_gpt.a 00:04:08.121 SO libspdk_bdev_error.so.6.0 00:04:08.121 SO libspdk_bdev_delay.so.6.0 00:04:08.121 SO libspdk_bdev_gpt.so.6.0 00:04:08.379 LIB libspdk_blobfs_bdev.a 00:04:08.379 SO libspdk_blobfs_bdev.so.6.0 00:04:08.379 SYMLINK libspdk_bdev_error.so 00:04:08.379 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:08.379 SYMLINK libspdk_bdev_delay.so 00:04:08.379 CC module/bdev/nvme/bdev_nvme.o 00:04:08.379 SYMLINK libspdk_bdev_gpt.so 00:04:08.379 CC module/bdev/passthru/vbdev_passthru.o 00:04:08.379 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:08.380 CC module/bdev/null/bdev_null_rpc.o 00:04:08.380 SYMLINK libspdk_blobfs_bdev.so 00:04:08.380 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:08.380 CC module/bdev/raid/bdev_raid.o 00:04:08.380 LIB libspdk_bdev_malloc.a 00:04:08.638 LIB libspdk_bdev_lvol.a 00:04:08.638 CC module/bdev/split/vbdev_split.o 00:04:08.638 SO libspdk_bdev_malloc.so.6.0 00:04:08.638 SO libspdk_bdev_lvol.so.6.0 00:04:08.638 CC module/bdev/raid/bdev_raid_rpc.o 00:04:08.638 LIB libspdk_bdev_null.a 00:04:08.638 SO libspdk_bdev_null.so.6.0 00:04:08.638 SYMLINK libspdk_bdev_malloc.so 00:04:08.638 CC module/bdev/split/vbdev_split_rpc.o 00:04:08.638 SYMLINK libspdk_bdev_lvol.so 00:04:08.638 LIB libspdk_bdev_passthru.a 00:04:08.638 SYMLINK libspdk_bdev_null.so 00:04:08.638 CC module/bdev/nvme/nvme_rpc.o 00:04:08.638 SO libspdk_bdev_passthru.so.6.0 00:04:08.638 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:08.897 CC module/bdev/raid/bdev_raid_sb.o 00:04:08.897 SYMLINK libspdk_bdev_passthru.so 00:04:08.897 CC module/bdev/raid/raid0.o 00:04:08.897 CC 
module/bdev/raid/raid1.o 00:04:08.897 LIB libspdk_bdev_split.a 00:04:08.897 CC module/bdev/aio/bdev_aio.o 00:04:08.897 SO libspdk_bdev_split.so.6.0 00:04:08.897 SYMLINK libspdk_bdev_split.so 00:04:08.897 CC module/bdev/aio/bdev_aio_rpc.o 00:04:08.897 CC module/bdev/nvme/bdev_mdns_client.o 00:04:09.155 CC module/bdev/raid/concat.o 00:04:09.155 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:09.155 CC module/bdev/nvme/vbdev_opal.o 00:04:09.155 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:09.155 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:09.155 CC module/bdev/ftl/bdev_ftl.o 00:04:09.155 LIB libspdk_bdev_aio.a 00:04:09.155 SO libspdk_bdev_aio.so.6.0 00:04:09.155 LIB libspdk_bdev_zone_block.a 00:04:09.414 SO libspdk_bdev_zone_block.so.6.0 00:04:09.414 SYMLINK libspdk_bdev_aio.so 00:04:09.414 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:09.414 SYMLINK libspdk_bdev_zone_block.so 00:04:09.414 CC module/bdev/iscsi/bdev_iscsi.o 00:04:09.414 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:09.414 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:09.414 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:09.414 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:09.414 LIB libspdk_bdev_ftl.a 00:04:09.672 LIB libspdk_bdev_raid.a 00:04:09.672 SO libspdk_bdev_ftl.so.6.0 00:04:09.672 SO libspdk_bdev_raid.so.6.0 00:04:09.672 SYMLINK libspdk_bdev_ftl.so 00:04:09.672 SYMLINK libspdk_bdev_raid.so 00:04:09.931 LIB libspdk_bdev_iscsi.a 00:04:09.931 SO libspdk_bdev_iscsi.so.6.0 00:04:09.931 SYMLINK libspdk_bdev_iscsi.so 00:04:09.931 LIB libspdk_bdev_virtio.a 00:04:10.190 SO libspdk_bdev_virtio.so.6.0 00:04:10.190 SYMLINK libspdk_bdev_virtio.so 00:04:11.191 LIB libspdk_bdev_nvme.a 00:04:11.191 SO libspdk_bdev_nvme.so.7.1 00:04:11.191 SYMLINK libspdk_bdev_nvme.so 00:04:11.758 CC module/event/subsystems/iobuf/iobuf.o 00:04:11.758 CC module/event/subsystems/sock/sock.o 00:04:11.758 CC module/event/subsystems/vmd/vmd.o 00:04:11.758 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:11.758 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:11.758 CC module/event/subsystems/fsdev/fsdev.o 00:04:11.758 CC module/event/subsystems/keyring/keyring.o 00:04:11.758 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:11.758 CC module/event/subsystems/scheduler/scheduler.o 00:04:12.016 LIB libspdk_event_keyring.a 00:04:12.016 LIB libspdk_event_fsdev.a 00:04:12.016 LIB libspdk_event_sock.a 00:04:12.016 LIB libspdk_event_vhost_blk.a 00:04:12.016 LIB libspdk_event_scheduler.a 00:04:12.016 LIB libspdk_event_vmd.a 00:04:12.016 LIB libspdk_event_iobuf.a 00:04:12.016 SO libspdk_event_fsdev.so.1.0 00:04:12.016 SO libspdk_event_keyring.so.1.0 00:04:12.016 SO libspdk_event_vhost_blk.so.3.0 00:04:12.016 SO libspdk_event_scheduler.so.4.0 00:04:12.016 SO libspdk_event_sock.so.5.0 00:04:12.016 SO libspdk_event_vmd.so.6.0 00:04:12.016 SO libspdk_event_iobuf.so.3.0 00:04:12.016 SYMLINK libspdk_event_vhost_blk.so 00:04:12.016 SYMLINK libspdk_event_fsdev.so 00:04:12.016 SYMLINK libspdk_event_keyring.so 00:04:12.016 SYMLINK libspdk_event_sock.so 00:04:12.016 SYMLINK libspdk_event_scheduler.so 00:04:12.016 SYMLINK libspdk_event_vmd.so 00:04:12.016 SYMLINK libspdk_event_iobuf.so 00:04:12.275 CC module/event/subsystems/accel/accel.o 00:04:12.533 LIB libspdk_event_accel.a 00:04:12.533 SO libspdk_event_accel.so.6.0 00:04:12.533 SYMLINK libspdk_event_accel.so 00:04:13.100 CC module/event/subsystems/bdev/bdev.o 00:04:13.100 LIB libspdk_event_bdev.a 00:04:13.100 SO libspdk_event_bdev.so.6.0 00:04:13.358 SYMLINK libspdk_event_bdev.so 00:04:13.617 CC 
module/event/subsystems/ublk/ublk.o 00:04:13.617 CC module/event/subsystems/scsi/scsi.o 00:04:13.617 CC module/event/subsystems/nbd/nbd.o 00:04:13.617 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:13.617 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:13.617 LIB libspdk_event_ublk.a 00:04:13.617 LIB libspdk_event_nbd.a 00:04:13.617 SO libspdk_event_ublk.so.3.0 00:04:13.617 LIB libspdk_event_scsi.a 00:04:13.875 SO libspdk_event_nbd.so.6.0 00:04:13.875 SO libspdk_event_scsi.so.6.0 00:04:13.875 SYMLINK libspdk_event_ublk.so 00:04:13.875 SYMLINK libspdk_event_nbd.so 00:04:13.875 SYMLINK libspdk_event_scsi.so 00:04:13.875 LIB libspdk_event_nvmf.a 00:04:13.875 SO libspdk_event_nvmf.so.6.0 00:04:13.875 SYMLINK libspdk_event_nvmf.so 00:04:14.134 CC module/event/subsystems/iscsi/iscsi.o 00:04:14.134 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:14.392 LIB libspdk_event_iscsi.a 00:04:14.392 LIB libspdk_event_vhost_scsi.a 00:04:14.392 SO libspdk_event_iscsi.so.6.0 00:04:14.392 SO libspdk_event_vhost_scsi.so.3.0 00:04:14.392 SYMLINK libspdk_event_iscsi.so 00:04:14.392 SYMLINK libspdk_event_vhost_scsi.so 00:04:14.650 SO libspdk.so.6.0 00:04:14.650 SYMLINK libspdk.so 00:04:14.908 CXX app/trace/trace.o 00:04:14.908 CC app/trace_record/trace_record.o 00:04:14.908 CC app/spdk_lspci/spdk_lspci.o 00:04:14.908 CC app/spdk_nvme_perf/perf.o 00:04:14.908 CC app/spdk_nvme_identify/identify.o 00:04:14.908 CC app/nvmf_tgt/nvmf_main.o 00:04:14.908 CC app/spdk_tgt/spdk_tgt.o 00:04:14.908 CC app/iscsi_tgt/iscsi_tgt.o 00:04:15.166 CC test/thread/poller_perf/poller_perf.o 00:04:15.166 CC examples/util/zipf/zipf.o 00:04:15.166 LINK spdk_lspci 00:04:15.166 LINK spdk_tgt 00:04:15.166 LINK nvmf_tgt 00:04:15.166 LINK spdk_trace_record 00:04:15.166 LINK poller_perf 00:04:15.166 LINK zipf 00:04:15.166 LINK iscsi_tgt 00:04:15.424 CC app/spdk_nvme_discover/discovery_aer.o 00:04:15.424 LINK spdk_trace 00:04:15.424 CC app/spdk_top/spdk_top.o 00:04:15.683 LINK spdk_nvme_discover 00:04:15.683 CC app/spdk_dd/spdk_dd.o 00:04:15.683 CC examples/vmd/lsvmd/lsvmd.o 00:04:15.683 CC examples/ioat/perf/perf.o 00:04:15.683 CC test/dma/test_dma/test_dma.o 00:04:15.683 CC examples/idxd/perf/perf.o 00:04:15.683 CC examples/ioat/verify/verify.o 00:04:15.683 LINK spdk_nvme_identify 00:04:15.683 LINK spdk_nvme_perf 00:04:15.683 LINK lsvmd 00:04:15.941 LINK ioat_perf 00:04:15.941 LINK verify 00:04:15.941 LINK spdk_dd 00:04:15.941 LINK idxd_perf 00:04:16.199 CC examples/vmd/led/led.o 00:04:16.199 CC app/vhost/vhost.o 00:04:16.199 LINK test_dma 00:04:16.199 CC app/fio/nvme/fio_plugin.o 00:04:16.199 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:16.199 CC test/app/bdev_svc/bdev_svc.o 00:04:16.199 LINK led 00:04:16.458 LINK vhost 00:04:16.458 LINK spdk_top 00:04:16.458 CC examples/sock/hello_world/hello_sock.o 00:04:16.458 LINK interrupt_tgt 00:04:16.458 CC examples/thread/thread/thread_ex.o 00:04:16.458 LINK bdev_svc 00:04:16.458 CC app/fio/bdev/fio_plugin.o 00:04:16.458 TEST_HEADER include/spdk/accel.h 00:04:16.458 TEST_HEADER include/spdk/accel_module.h 00:04:16.458 TEST_HEADER include/spdk/assert.h 00:04:16.458 TEST_HEADER include/spdk/barrier.h 00:04:16.458 TEST_HEADER include/spdk/base64.h 00:04:16.458 TEST_HEADER include/spdk/bdev.h 00:04:16.717 TEST_HEADER include/spdk/bdev_module.h 00:04:16.717 TEST_HEADER include/spdk/bdev_zone.h 00:04:16.717 TEST_HEADER include/spdk/bit_array.h 00:04:16.717 TEST_HEADER include/spdk/bit_pool.h 00:04:16.717 TEST_HEADER include/spdk/blob_bdev.h 00:04:16.717 TEST_HEADER include/spdk/blobfs_bdev.h 
00:04:16.717 TEST_HEADER include/spdk/blobfs.h 00:04:16.717 TEST_HEADER include/spdk/blob.h 00:04:16.717 TEST_HEADER include/spdk/conf.h 00:04:16.717 TEST_HEADER include/spdk/config.h 00:04:16.717 TEST_HEADER include/spdk/cpuset.h 00:04:16.717 TEST_HEADER include/spdk/crc16.h 00:04:16.717 TEST_HEADER include/spdk/crc32.h 00:04:16.717 TEST_HEADER include/spdk/crc64.h 00:04:16.717 TEST_HEADER include/spdk/dif.h 00:04:16.717 TEST_HEADER include/spdk/dma.h 00:04:16.717 TEST_HEADER include/spdk/endian.h 00:04:16.717 TEST_HEADER include/spdk/env_dpdk.h 00:04:16.717 TEST_HEADER include/spdk/env.h 00:04:16.717 TEST_HEADER include/spdk/event.h 00:04:16.717 TEST_HEADER include/spdk/fd_group.h 00:04:16.717 TEST_HEADER include/spdk/fd.h 00:04:16.717 TEST_HEADER include/spdk/file.h 00:04:16.717 TEST_HEADER include/spdk/fsdev.h 00:04:16.717 TEST_HEADER include/spdk/fsdev_module.h 00:04:16.717 TEST_HEADER include/spdk/ftl.h 00:04:16.717 TEST_HEADER include/spdk/gpt_spec.h 00:04:16.717 TEST_HEADER include/spdk/hexlify.h 00:04:16.717 TEST_HEADER include/spdk/histogram_data.h 00:04:16.717 TEST_HEADER include/spdk/idxd.h 00:04:16.717 TEST_HEADER include/spdk/idxd_spec.h 00:04:16.717 TEST_HEADER include/spdk/init.h 00:04:16.717 TEST_HEADER include/spdk/ioat.h 00:04:16.717 TEST_HEADER include/spdk/ioat_spec.h 00:04:16.717 TEST_HEADER include/spdk/iscsi_spec.h 00:04:16.717 TEST_HEADER include/spdk/json.h 00:04:16.717 TEST_HEADER include/spdk/jsonrpc.h 00:04:16.717 TEST_HEADER include/spdk/keyring.h 00:04:16.717 TEST_HEADER include/spdk/keyring_module.h 00:04:16.717 LINK hello_sock 00:04:16.717 TEST_HEADER include/spdk/likely.h 00:04:16.717 TEST_HEADER include/spdk/log.h 00:04:16.717 TEST_HEADER include/spdk/lvol.h 00:04:16.717 TEST_HEADER include/spdk/md5.h 00:04:16.717 TEST_HEADER include/spdk/memory.h 00:04:16.717 CC test/app/histogram_perf/histogram_perf.o 00:04:16.717 TEST_HEADER include/spdk/mmio.h 00:04:16.717 TEST_HEADER include/spdk/nbd.h 00:04:16.717 TEST_HEADER include/spdk/net.h 00:04:16.717 LINK thread 00:04:16.717 TEST_HEADER include/spdk/notify.h 00:04:16.717 TEST_HEADER include/spdk/nvme.h 00:04:16.717 CC test/app/jsoncat/jsoncat.o 00:04:16.717 TEST_HEADER include/spdk/nvme_intel.h 00:04:16.717 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:16.717 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:16.717 TEST_HEADER include/spdk/nvme_spec.h 00:04:16.717 TEST_HEADER include/spdk/nvme_zns.h 00:04:16.717 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:16.717 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:16.717 TEST_HEADER include/spdk/nvmf.h 00:04:16.717 TEST_HEADER include/spdk/nvmf_spec.h 00:04:16.717 TEST_HEADER include/spdk/nvmf_transport.h 00:04:16.717 TEST_HEADER include/spdk/opal.h 00:04:16.717 TEST_HEADER include/spdk/opal_spec.h 00:04:16.717 TEST_HEADER include/spdk/pci_ids.h 00:04:16.717 TEST_HEADER include/spdk/pipe.h 00:04:16.717 TEST_HEADER include/spdk/queue.h 00:04:16.717 TEST_HEADER include/spdk/reduce.h 00:04:16.717 LINK spdk_nvme 00:04:16.717 TEST_HEADER include/spdk/rpc.h 00:04:16.717 TEST_HEADER include/spdk/scheduler.h 00:04:16.717 TEST_HEADER include/spdk/scsi.h 00:04:16.717 TEST_HEADER include/spdk/scsi_spec.h 00:04:16.717 TEST_HEADER include/spdk/sock.h 00:04:16.717 TEST_HEADER include/spdk/stdinc.h 00:04:16.717 TEST_HEADER include/spdk/string.h 00:04:16.717 TEST_HEADER include/spdk/thread.h 00:04:16.717 TEST_HEADER include/spdk/trace.h 00:04:16.717 TEST_HEADER include/spdk/trace_parser.h 00:04:16.717 TEST_HEADER include/spdk/tree.h 00:04:16.717 TEST_HEADER include/spdk/ublk.h 
00:04:16.717 TEST_HEADER include/spdk/util.h 00:04:16.717 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:16.717 TEST_HEADER include/spdk/uuid.h 00:04:16.717 TEST_HEADER include/spdk/version.h 00:04:16.717 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:16.717 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:16.717 TEST_HEADER include/spdk/vhost.h 00:04:16.717 TEST_HEADER include/spdk/vmd.h 00:04:16.717 TEST_HEADER include/spdk/xor.h 00:04:16.717 TEST_HEADER include/spdk/zipf.h 00:04:16.717 CXX test/cpp_headers/accel.o 00:04:16.717 CC test/app/stub/stub.o 00:04:16.717 CXX test/cpp_headers/accel_module.o 00:04:16.976 CXX test/cpp_headers/assert.o 00:04:16.976 LINK histogram_perf 00:04:16.976 LINK jsoncat 00:04:16.976 CXX test/cpp_headers/barrier.o 00:04:16.976 LINK spdk_bdev 00:04:16.976 LINK stub 00:04:17.235 CXX test/cpp_headers/base64.o 00:04:17.235 CXX test/cpp_headers/bdev.o 00:04:17.235 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:17.235 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:17.235 LINK nvme_fuzz 00:04:17.504 CC test/event/event_perf/event_perf.o 00:04:17.504 CXX test/cpp_headers/bdev_module.o 00:04:17.504 CC test/env/vtophys/vtophys.o 00:04:17.504 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:17.504 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:17.504 CC test/env/mem_callbacks/mem_callbacks.o 00:04:17.504 CC test/env/memory/memory_ut.o 00:04:17.504 CC test/env/pci/pci_ut.o 00:04:17.504 LINK event_perf 00:04:17.793 LINK vtophys 00:04:17.793 LINK env_dpdk_post_init 00:04:17.793 CXX test/cpp_headers/bdev_zone.o 00:04:17.793 CXX test/cpp_headers/bit_array.o 00:04:17.793 CC test/event/reactor/reactor.o 00:04:17.793 LINK vhost_fuzz 00:04:18.052 LINK pci_ut 00:04:18.052 CC examples/nvme/hello_world/hello_world.o 00:04:18.052 CXX test/cpp_headers/bit_pool.o 00:04:18.052 LINK reactor 00:04:18.052 CXX test/cpp_headers/blob_bdev.o 00:04:18.052 CC examples/nvme/reconnect/reconnect.o 00:04:18.052 LINK mem_callbacks 00:04:18.311 LINK hello_world 00:04:18.311 CXX test/cpp_headers/blobfs_bdev.o 00:04:18.311 CC test/event/reactor_perf/reactor_perf.o 00:04:18.311 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:18.311 CXX test/cpp_headers/blobfs.o 00:04:18.569 CC test/nvme/aer/aer.o 00:04:18.569 LINK reconnect 00:04:18.569 LINK reactor_perf 00:04:18.569 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:18.569 CXX test/cpp_headers/blob.o 00:04:18.828 CC test/event/app_repeat/app_repeat.o 00:04:18.828 LINK memory_ut 00:04:18.828 LINK aer 00:04:18.828 CC test/nvme/reset/reset.o 00:04:18.828 LINK hello_fsdev 00:04:18.828 CXX test/cpp_headers/conf.o 00:04:19.087 CC examples/accel/perf/accel_perf.o 00:04:19.087 LINK nvme_manage 00:04:19.087 LINK app_repeat 00:04:19.087 CXX test/cpp_headers/config.o 00:04:19.087 LINK iscsi_fuzz 00:04:19.087 CXX test/cpp_headers/cpuset.o 00:04:19.087 CC test/nvme/sgl/sgl.o 00:04:19.345 LINK reset 00:04:19.345 CC test/nvme/e2edp/nvme_dp.o 00:04:19.345 CC test/nvme/overhead/overhead.o 00:04:19.345 CC examples/nvme/arbitration/arbitration.o 00:04:19.345 CXX test/cpp_headers/crc16.o 00:04:19.345 CXX test/cpp_headers/crc32.o 00:04:19.604 CC test/event/scheduler/scheduler.o 00:04:19.604 CC examples/nvme/hotplug/hotplug.o 00:04:19.604 LINK accel_perf 00:04:19.604 LINK sgl 00:04:19.604 LINK nvme_dp 00:04:19.604 LINK overhead 00:04:19.604 CXX test/cpp_headers/crc64.o 00:04:19.604 LINK arbitration 00:04:19.604 CC test/nvme/err_injection/err_injection.o 00:04:19.862 LINK scheduler 00:04:19.862 CXX test/cpp_headers/dif.o 00:04:19.862 LINK hotplug 00:04:19.862 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:04:19.862 CC test/nvme/startup/startup.o 00:04:19.862 CC test/nvme/reserve/reserve.o 00:04:19.862 CXX test/cpp_headers/dma.o 00:04:19.862 LINK err_injection 00:04:19.862 CC test/nvme/simple_copy/simple_copy.o 00:04:20.121 CC examples/nvme/abort/abort.o 00:04:20.121 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:20.121 CC examples/blob/hello_world/hello_blob.o 00:04:20.121 LINK cmb_copy 00:04:20.121 CXX test/cpp_headers/endian.o 00:04:20.121 LINK startup 00:04:20.380 LINK reserve 00:04:20.380 CXX test/cpp_headers/env_dpdk.o 00:04:20.380 LINK simple_copy 00:04:20.639 LINK pmr_persistence 00:04:20.639 LINK hello_blob 00:04:20.639 CC test/nvme/connect_stress/connect_stress.o 00:04:20.639 CC test/rpc_client/rpc_client_test.o 00:04:20.639 LINK abort 00:04:20.639 CXX test/cpp_headers/env.o 00:04:20.639 CC test/nvme/boot_partition/boot_partition.o 00:04:20.898 CC examples/bdev/hello_world/hello_bdev.o 00:04:20.898 LINK connect_stress 00:04:20.898 LINK boot_partition 00:04:20.898 CC examples/bdev/bdevperf/bdevperf.o 00:04:20.898 LINK rpc_client_test 00:04:20.898 CC examples/blob/cli/blobcli.o 00:04:20.898 CXX test/cpp_headers/event.o 00:04:21.156 LINK hello_bdev 00:04:21.156 CXX test/cpp_headers/fd_group.o 00:04:21.415 CC test/accel/dif/dif.o 00:04:21.415 CC test/nvme/compliance/nvme_compliance.o 00:04:21.415 CC test/blobfs/mkfs/mkfs.o 00:04:21.415 CC test/nvme/fused_ordering/fused_ordering.o 00:04:21.415 CXX test/cpp_headers/fd.o 00:04:21.415 CC test/lvol/esnap/esnap.o 00:04:21.674 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:21.674 LINK blobcli 00:04:21.674 LINK mkfs 00:04:21.674 LINK fused_ordering 00:04:21.674 CXX test/cpp_headers/file.o 00:04:21.674 LINK nvme_compliance 00:04:21.932 CXX test/cpp_headers/fsdev.o 00:04:21.932 CXX test/cpp_headers/fsdev_module.o 00:04:21.932 LINK doorbell_aers 00:04:21.932 CC test/nvme/fdp/fdp.o 00:04:21.932 CXX test/cpp_headers/ftl.o 00:04:21.932 LINK bdevperf 00:04:21.932 CXX test/cpp_headers/gpt_spec.o 00:04:21.932 LINK dif 00:04:22.190 CXX test/cpp_headers/hexlify.o 00:04:22.190 CC test/nvme/cuse/cuse.o 00:04:22.190 CXX test/cpp_headers/histogram_data.o 00:04:22.190 CXX test/cpp_headers/idxd.o 00:04:22.190 CXX test/cpp_headers/idxd_spec.o 00:04:22.190 CXX test/cpp_headers/init.o 00:04:22.190 CXX test/cpp_headers/ioat.o 00:04:22.449 LINK fdp 00:04:22.449 CXX test/cpp_headers/ioat_spec.o 00:04:22.449 CXX test/cpp_headers/iscsi_spec.o 00:04:22.449 CXX test/cpp_headers/json.o 00:04:22.449 CXX test/cpp_headers/jsonrpc.o 00:04:22.449 CC examples/nvmf/nvmf/nvmf.o 00:04:22.449 CXX test/cpp_headers/keyring.o 00:04:22.449 CC test/bdev/bdevio/bdevio.o 00:04:22.449 CXX test/cpp_headers/keyring_module.o 00:04:22.708 CXX test/cpp_headers/likely.o 00:04:22.708 CXX test/cpp_headers/log.o 00:04:22.708 CXX test/cpp_headers/lvol.o 00:04:22.708 CXX test/cpp_headers/md5.o 00:04:22.708 CXX test/cpp_headers/memory.o 00:04:22.708 CXX test/cpp_headers/mmio.o 00:04:22.708 CXX test/cpp_headers/nbd.o 00:04:22.708 LINK nvmf 00:04:22.708 CXX test/cpp_headers/net.o 00:04:22.968 CXX test/cpp_headers/notify.o 00:04:22.968 CXX test/cpp_headers/nvme_intel.o 00:04:22.968 CXX test/cpp_headers/nvme.o 00:04:22.968 CXX test/cpp_headers/nvme_ocssd.o 00:04:22.968 LINK bdevio 00:04:22.968 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:23.235 CXX test/cpp_headers/nvme_spec.o 00:04:23.235 CXX test/cpp_headers/nvme_zns.o 00:04:23.235 CXX test/cpp_headers/nvmf_cmd.o 00:04:23.235 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:23.235 CXX test/cpp_headers/nvmf.o 
00:04:23.235 CXX test/cpp_headers/nvmf_spec.o 00:04:23.235 CXX test/cpp_headers/nvmf_transport.o 00:04:23.235 CXX test/cpp_headers/opal.o 00:04:23.507 CXX test/cpp_headers/opal_spec.o 00:04:23.507 CXX test/cpp_headers/pci_ids.o 00:04:23.507 CXX test/cpp_headers/pipe.o 00:04:23.507 LINK cuse 00:04:23.507 CXX test/cpp_headers/queue.o 00:04:23.507 CXX test/cpp_headers/reduce.o 00:04:23.507 CXX test/cpp_headers/rpc.o 00:04:23.507 CXX test/cpp_headers/scheduler.o 00:04:23.507 CXX test/cpp_headers/scsi.o 00:04:23.507 CXX test/cpp_headers/scsi_spec.o 00:04:23.508 CXX test/cpp_headers/sock.o 00:04:23.508 CXX test/cpp_headers/stdinc.o 00:04:23.508 CXX test/cpp_headers/string.o 00:04:23.508 CXX test/cpp_headers/thread.o 00:04:23.766 CXX test/cpp_headers/trace.o 00:04:23.766 CXX test/cpp_headers/trace_parser.o 00:04:23.766 CXX test/cpp_headers/tree.o 00:04:23.766 CXX test/cpp_headers/ublk.o 00:04:23.766 CXX test/cpp_headers/util.o 00:04:23.766 CXX test/cpp_headers/uuid.o 00:04:23.766 CXX test/cpp_headers/version.o 00:04:23.766 CXX test/cpp_headers/vfio_user_pci.o 00:04:23.766 CXX test/cpp_headers/vfio_user_spec.o 00:04:23.766 CXX test/cpp_headers/vhost.o 00:04:23.766 CXX test/cpp_headers/vmd.o 00:04:23.766 CXX test/cpp_headers/xor.o 00:04:23.766 CXX test/cpp_headers/zipf.o 00:04:27.052 LINK esnap 00:04:27.311 00:04:27.311 real 1m37.266s 00:04:27.311 user 8m46.772s 00:04:27.311 sys 1m49.629s 00:04:27.311 11:26:27 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:27.311 11:26:27 make -- common/autotest_common.sh@10 -- $ set +x 00:04:27.311 ************************************ 00:04:27.311 END TEST make 00:04:27.311 ************************************ 00:04:27.311 11:26:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:27.311 11:26:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:27.311 11:26:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:27.311 11:26:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.311 11:26:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:27.311 11:26:27 -- pm/common@44 -- $ pid=5302 00:04:27.311 11:26:27 -- pm/common@50 -- $ kill -TERM 5302 00:04:27.311 11:26:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.311 11:26:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:27.311 11:26:27 -- pm/common@44 -- $ pid=5304 00:04:27.311 11:26:27 -- pm/common@50 -- $ kill -TERM 5304 00:04:27.311 11:26:27 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:27.311 11:26:27 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:27.311 11:26:27 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:27.311 11:26:27 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:27.311 11:26:27 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:27.570 11:26:28 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:27.570 11:26:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.570 11:26:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.570 11:26:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.570 11:26:28 -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.570 11:26:28 -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.570 11:26:28 -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.570 11:26:28 -- scripts/common.sh@337 -- # read -ra ver2 
00:04:27.570 11:26:28 -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.570 11:26:28 -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.570 11:26:28 -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.570 11:26:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.570 11:26:28 -- scripts/common.sh@344 -- # case "$op" in 00:04:27.570 11:26:28 -- scripts/common.sh@345 -- # : 1 00:04:27.570 11:26:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.570 11:26:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.570 11:26:28 -- scripts/common.sh@365 -- # decimal 1 00:04:27.570 11:26:28 -- scripts/common.sh@353 -- # local d=1 00:04:27.570 11:26:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.570 11:26:28 -- scripts/common.sh@355 -- # echo 1 00:04:27.570 11:26:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.570 11:26:28 -- scripts/common.sh@366 -- # decimal 2 00:04:27.570 11:26:28 -- scripts/common.sh@353 -- # local d=2 00:04:27.570 11:26:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.570 11:26:28 -- scripts/common.sh@355 -- # echo 2 00:04:27.570 11:26:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.570 11:26:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.570 11:26:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.570 11:26:28 -- scripts/common.sh@368 -- # return 0 00:04:27.570 11:26:28 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.570 11:26:28 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:27.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.570 --rc genhtml_branch_coverage=1 00:04:27.570 --rc genhtml_function_coverage=1 00:04:27.570 --rc genhtml_legend=1 00:04:27.570 --rc geninfo_all_blocks=1 00:04:27.570 --rc geninfo_unexecuted_blocks=1 00:04:27.570 00:04:27.570 ' 00:04:27.570 11:26:28 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:27.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.570 --rc genhtml_branch_coverage=1 00:04:27.570 --rc genhtml_function_coverage=1 00:04:27.570 --rc genhtml_legend=1 00:04:27.570 --rc geninfo_all_blocks=1 00:04:27.570 --rc geninfo_unexecuted_blocks=1 00:04:27.570 00:04:27.570 ' 00:04:27.570 11:26:28 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:27.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.570 --rc genhtml_branch_coverage=1 00:04:27.570 --rc genhtml_function_coverage=1 00:04:27.570 --rc genhtml_legend=1 00:04:27.570 --rc geninfo_all_blocks=1 00:04:27.570 --rc geninfo_unexecuted_blocks=1 00:04:27.570 00:04:27.570 ' 00:04:27.570 11:26:28 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:27.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.570 --rc genhtml_branch_coverage=1 00:04:27.570 --rc genhtml_function_coverage=1 00:04:27.570 --rc genhtml_legend=1 00:04:27.570 --rc geninfo_all_blocks=1 00:04:27.570 --rc geninfo_unexecuted_blocks=1 00:04:27.570 00:04:27.570 ' 00:04:27.570 11:26:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.570 11:26:28 -- nvmf/common.sh@7 -- # uname -s 00:04:27.570 11:26:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.570 11:26:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.570 11:26:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.570 11:26:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.570 11:26:28 -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:04:27.570 11:26:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.570 11:26:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.570 11:26:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.570 11:26:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.570 11:26:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.570 11:26:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:04:27.570 11:26:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:04:27.570 11:26:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.570 11:26:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.570 11:26:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:27.570 11:26:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.570 11:26:28 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.570 11:26:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:27.570 11:26:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.570 11:26:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.570 11:26:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.570 11:26:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.570 11:26:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.570 11:26:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.570 11:26:28 -- paths/export.sh@5 -- # export PATH 00:04:27.570 11:26:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.570 11:26:28 -- nvmf/common.sh@51 -- # : 0 00:04:27.570 11:26:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:27.570 11:26:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:27.570 11:26:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.570 11:26:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.570 11:26:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.570 11:26:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:27.570 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:27.570 11:26:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:27.570 11:26:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:27.570 11:26:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:27.570 11:26:28 -- spdk/autotest.sh@27 -- # '[' 0 
-ne 0 ']' 00:04:27.570 11:26:28 -- spdk/autotest.sh@32 -- # uname -s 00:04:27.570 11:26:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:27.570 11:26:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:27.570 11:26:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.570 11:26:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:27.570 11:26:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.570 11:26:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:27.570 11:26:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:27.570 11:26:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:27.570 11:26:28 -- spdk/autotest.sh@48 -- # udevadm_pid=57982 00:04:27.570 11:26:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:27.570 11:26:28 -- pm/common@17 -- # local monitor 00:04:27.570 11:26:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.571 11:26:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.571 11:26:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:27.571 11:26:28 -- pm/common@25 -- # sleep 1 00:04:27.571 11:26:28 -- pm/common@21 -- # date +%s 00:04:27.571 11:26:28 -- pm/common@21 -- # date +%s 00:04:27.571 11:26:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734348388 00:04:27.571 11:26:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734348388 00:04:27.571 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734348388_collect-cpu-load.pm.log 00:04:27.571 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734348388_collect-vmstat.pm.log 00:04:28.507 11:26:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:28.507 11:26:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:28.507 11:26:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.507 11:26:29 -- common/autotest_common.sh@10 -- # set +x 00:04:28.507 11:26:29 -- spdk/autotest.sh@59 -- # create_test_list 00:04:28.507 11:26:29 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:28.507 11:26:29 -- common/autotest_common.sh@10 -- # set +x 00:04:28.507 11:26:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:28.507 11:26:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:28.507 11:26:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:28.507 11:26:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:28.507 11:26:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:28.507 11:26:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:28.507 11:26:29 -- common/autotest_common.sh@1457 -- # uname 00:04:28.507 11:26:29 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:28.507 11:26:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:28.507 11:26:29 -- common/autotest_common.sh@1477 -- # uname 00:04:28.507 11:26:29 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:28.507 11:26:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:28.507 
11:26:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:28.765 lcov: LCOV version 1.15 00:04:28.765 11:26:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:46.863 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:46.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:01.771 11:27:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:01.771 11:27:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.771 11:27:01 -- common/autotest_common.sh@10 -- # set +x 00:05:01.771 11:27:01 -- spdk/autotest.sh@78 -- # rm -f 00:05:01.771 11:27:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.771 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.771 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:01.771 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:02.030 11:27:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:02.030 11:27:02 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:02.030 11:27:02 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:02.030 11:27:02 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:02.030 11:27:02 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:02.030 11:27:02 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:02.030 11:27:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:02.030 11:27:02 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:02.030 11:27:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:02.030 11:27:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:02.030 11:27:02 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:02.030 11:27:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:02.030 11:27:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:02.030 11:27:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:02.030 11:27:02 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:02.030 11:27:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:02.030 11:27:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:02.030 11:27:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:02.030 11:27:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:02.030 11:27:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:02.030 11:27:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:02.030 11:27:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:02.030 11:27:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:02.030 11:27:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:02.030 
11:27:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:02.030 11:27:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:02.030 11:27:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:02.030 11:27:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:02.030 11:27:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:02.030 11:27:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:02.030 11:27:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:02.030 11:27:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.030 11:27:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.030 11:27:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:02.030 11:27:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:02.030 11:27:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:02.030 No valid GPT data, bailing 00:05:02.030 11:27:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:02.030 11:27:02 -- scripts/common.sh@394 -- # pt= 00:05:02.030 11:27:02 -- scripts/common.sh@395 -- # return 1 00:05:02.030 11:27:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:02.030 1+0 records in 00:05:02.030 1+0 records out 00:05:02.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451858 s, 232 MB/s 00:05:02.030 11:27:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.030 11:27:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.030 11:27:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:02.030 11:27:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:02.030 11:27:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:02.030 No valid GPT data, bailing 00:05:02.030 11:27:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:02.030 11:27:02 -- scripts/common.sh@394 -- # pt= 00:05:02.030 11:27:02 -- scripts/common.sh@395 -- # return 1 00:05:02.030 11:27:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:02.030 1+0 records in 00:05:02.030 1+0 records out 00:05:02.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379322 s, 276 MB/s 00:05:02.030 11:27:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.030 11:27:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.030 11:27:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:02.030 11:27:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:02.030 11:27:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:02.030 No valid GPT data, bailing 00:05:02.030 11:27:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:02.030 11:27:02 -- scripts/common.sh@394 -- # pt= 00:05:02.030 11:27:02 -- scripts/common.sh@395 -- # return 1 00:05:02.030 11:27:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:02.030 1+0 records in 00:05:02.030 1+0 records out 00:05:02.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432469 s, 242 MB/s 00:05:02.030 11:27:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.030 11:27:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:02.030 11:27:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:02.030 11:27:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:02.030 11:27:02 -- 
scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:02.289 No valid GPT data, bailing 00:05:02.289 11:27:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:02.289 11:27:02 -- scripts/common.sh@394 -- # pt= 00:05:02.289 11:27:02 -- scripts/common.sh@395 -- # return 1 00:05:02.289 11:27:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:02.289 1+0 records in 00:05:02.289 1+0 records out 00:05:02.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481742 s, 218 MB/s 00:05:02.289 11:27:02 -- spdk/autotest.sh@105 -- # sync 00:05:02.289 11:27:02 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:02.289 11:27:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:02.289 11:27:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:04.190 11:27:04 -- spdk/autotest.sh@111 -- # uname -s 00:05:04.190 11:27:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:04.190 11:27:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:04.190 11:27:04 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:04.755 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.755 Hugepages 00:05:04.755 node hugesize free / total 00:05:04.755 node0 1048576kB 0 / 0 00:05:04.755 node0 2048kB 0 / 0 00:05:04.755 00:05:04.755 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:04.755 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:05.013 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:05.013 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:05.013 11:27:05 -- spdk/autotest.sh@117 -- # uname -s 00:05:05.013 11:27:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:05.013 11:27:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:05.013 11:27:05 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.580 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.839 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.839 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.839 11:27:06 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:06.775 11:27:07 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:06.775 11:27:07 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:06.775 11:27:07 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:06.775 11:27:07 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:06.775 11:27:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:06.775 11:27:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:06.775 11:27:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:06.775 11:27:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:06.775 11:27:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:06.775 11:27:07 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:06.775 11:27:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:06.775 11:27:07 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.341 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, 
so not binding PCI dev 00:05:07.341 Waiting for block devices as requested 00:05:07.341 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:07.341 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:07.341 11:27:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:07.341 11:27:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:07.341 11:27:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:07.341 11:27:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:07.341 11:27:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:07.341 11:27:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:07.341 11:27:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:07.342 11:27:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:07.342 11:27:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:07.342 11:27:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:07.342 11:27:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:07.342 11:27:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:07.342 11:27:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:07.342 11:27:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:07.342 11:27:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:07.600 11:27:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:07.600 11:27:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:07.600 11:27:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:07.600 11:27:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:07.600 11:27:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:07.600 11:27:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:07.600 11:27:08 -- common/autotest_common.sh@1543 -- # continue 00:05:07.600 11:27:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:07.600 11:27:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:07.600 11:27:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:07.600 11:27:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:07.600 11:27:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:07.600 11:27:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:07.600 11:27:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:07.600 11:27:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:07.600 11:27:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:07.600 11:27:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:07.600 11:27:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:07.600 11:27:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:07.600 11:27:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:07.600 11:27:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:07.600 11:27:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:07.600 11:27:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 
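The id-ctrl parsing above (repeated for the second controller, continuing just below) reads the controller's OACS field (Optional Admin Command Support) and masks bit 3, Namespace Management, before applying the unallocated-capacity check: 0x12a & 0x8 = 8, so the capability is present. The same probe as a standalone snippet, using the device path from this run:

    oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)   # ' 0x12a' here
    if (( oacs & 0x8 )); then                                   # bit 3: Namespace Management
        unvmcap=$(nvme id-ctrl /dev/nvme1 | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) || echo 'controller has unallocated capacity'
    fi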
00:05:07.600 11:27:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:07.600 11:27:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:07.600 11:27:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:07.600 11:27:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:07.600 11:27:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:07.600 11:27:08 -- common/autotest_common.sh@1543 -- # continue 00:05:07.600 11:27:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:07.600 11:27:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.600 11:27:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.600 11:27:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:07.600 11:27:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.600 11:27:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.600 11:27:08 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.167 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.167 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.426 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.426 11:27:09 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:08.426 11:27:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.426 11:27:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.426 11:27:09 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:08.426 11:27:09 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:08.426 11:27:09 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:08.426 11:27:09 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:08.426 11:27:09 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:08.426 11:27:09 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:08.426 11:27:09 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:08.426 11:27:09 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:08.426 11:27:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:08.426 11:27:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:08.426 11:27:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:08.426 11:27:09 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:08.426 11:27:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:08.426 11:27:09 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:08.426 11:27:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:08.426 11:27:09 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:08.426 11:27:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:08.426 11:27:09 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:08.426 11:27:09 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:08.426 11:27:09 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:08.426 11:27:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:08.426 11:27:09 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:08.426 11:27:09 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:08.426 11:27:09 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:08.426 11:27:09 -- common/autotest_common.sh@1572 
-- # return 0 00:05:08.426 11:27:09 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:08.426 11:27:09 -- common/autotest_common.sh@1580 -- # return 0 00:05:08.426 11:27:09 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:08.426 11:27:09 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:08.426 11:27:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:08.426 11:27:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:08.426 11:27:09 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:08.426 11:27:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.426 11:27:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.426 11:27:09 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:08.426 11:27:09 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:08.426 11:27:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.426 11:27:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.426 11:27:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.426 ************************************ 00:05:08.426 START TEST env 00:05:08.426 ************************************ 00:05:08.426 11:27:09 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:08.685 * Looking for test storage... 00:05:08.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:08.685 11:27:09 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.685 11:27:09 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.685 11:27:09 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.685 11:27:09 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.685 11:27:09 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.685 11:27:09 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.685 11:27:09 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.685 11:27:09 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.685 11:27:09 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.685 11:27:09 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.685 11:27:09 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.685 11:27:09 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.685 11:27:09 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.685 11:27:09 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.685 11:27:09 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.685 11:27:09 env -- scripts/common.sh@344 -- # case "$op" in 00:05:08.685 11:27:09 env -- scripts/common.sh@345 -- # : 1 00:05:08.685 11:27:09 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.685 11:27:09 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.685 11:27:09 env -- scripts/common.sh@365 -- # decimal 1 00:05:08.685 11:27:09 env -- scripts/common.sh@353 -- # local d=1 00:05:08.685 11:27:09 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.685 11:27:09 env -- scripts/common.sh@355 -- # echo 1 00:05:08.685 11:27:09 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.685 11:27:09 env -- scripts/common.sh@366 -- # decimal 2 00:05:08.685 11:27:09 env -- scripts/common.sh@353 -- # local d=2 00:05:08.685 11:27:09 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.685 11:27:09 env -- scripts/common.sh@355 -- # echo 2 00:05:08.685 11:27:09 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.685 11:27:09 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.685 11:27:09 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.685 11:27:09 env -- scripts/common.sh@368 -- # return 0 00:05:08.685 11:27:09 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.685 11:27:09 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.685 --rc genhtml_branch_coverage=1 00:05:08.685 --rc genhtml_function_coverage=1 00:05:08.685 --rc genhtml_legend=1 00:05:08.685 --rc geninfo_all_blocks=1 00:05:08.685 --rc geninfo_unexecuted_blocks=1 00:05:08.685 00:05:08.685 ' 00:05:08.685 11:27:09 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.685 --rc genhtml_branch_coverage=1 00:05:08.685 --rc genhtml_function_coverage=1 00:05:08.685 --rc genhtml_legend=1 00:05:08.685 --rc geninfo_all_blocks=1 00:05:08.685 --rc geninfo_unexecuted_blocks=1 00:05:08.685 00:05:08.685 ' 00:05:08.685 11:27:09 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.685 --rc genhtml_branch_coverage=1 00:05:08.685 --rc genhtml_function_coverage=1 00:05:08.685 --rc genhtml_legend=1 00:05:08.685 --rc geninfo_all_blocks=1 00:05:08.685 --rc geninfo_unexecuted_blocks=1 00:05:08.685 00:05:08.685 ' 00:05:08.685 11:27:09 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.685 --rc genhtml_branch_coverage=1 00:05:08.685 --rc genhtml_function_coverage=1 00:05:08.685 --rc genhtml_legend=1 00:05:08.685 --rc geninfo_all_blocks=1 00:05:08.685 --rc geninfo_unexecuted_blocks=1 00:05:08.685 00:05:08.685 ' 00:05:08.685 11:27:09 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:08.685 11:27:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.685 11:27:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.685 11:27:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.685 ************************************ 00:05:08.685 START TEST env_memory 00:05:08.685 ************************************ 00:05:08.685 11:27:09 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:08.685 00:05:08.685 00:05:08.685 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.685 http://cunit.sourceforge.net/ 00:05:08.685 00:05:08.685 00:05:08.685 Suite: memory 00:05:08.685 Test: alloc and free memory map ...[2024-12-16 11:27:09.421279] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:08.685 passed 00:05:08.944 Test: mem map translation ...[2024-12-16 11:27:09.445732] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:08.944 [2024-12-16 11:27:09.445776] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:08.944 [2024-12-16 11:27:09.445823] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:08.944 [2024-12-16 11:27:09.445833] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:08.944 passed 00:05:08.944 Test: mem map registration ...[2024-12-16 11:27:09.495941] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:08.944 [2024-12-16 11:27:09.495987] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:08.944 passed 00:05:08.944 Test: mem map adjacent registrations ...passed 00:05:08.944 00:05:08.944 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.944 suites 1 1 n/a 0 0 00:05:08.944 tests 4 4 4 0 0 00:05:08.944 asserts 152 152 152 0 n/a 00:05:08.944 00:05:08.944 Elapsed time = 0.168 seconds 00:05:08.944 00:05:08.944 real 0m0.183s 00:05:08.944 user 0m0.166s 00:05:08.944 sys 0m0.015s 00:05:08.944 11:27:09 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.944 11:27:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:08.944 ************************************ 00:05:08.944 END TEST env_memory 00:05:08.944 ************************************ 00:05:08.944 11:27:09 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:08.944 11:27:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.944 11:27:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.944 11:27:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.944 ************************************ 00:05:08.944 START TEST env_vtophys 00:05:08.944 ************************************ 00:05:08.944 11:27:09 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:08.944 EAL: lib.eal log level changed from notice to debug 00:05:08.944 EAL: Detected lcore 0 as core 0 on socket 0 00:05:08.944 EAL: Detected lcore 1 as core 0 on socket 0 00:05:08.944 EAL: Detected lcore 2 as core 0 on socket 0 00:05:08.944 EAL: Detected lcore 3 as core 0 on socket 0 00:05:08.944 EAL: Detected lcore 4 as core 0 on socket 0 00:05:08.944 EAL: Detected lcore 5 as core 0 on socket 0 00:05:08.944 EAL: Detected lcore 6 as core 0 on socket 0 00:05:08.944 EAL: Detected lcore 7 as core 0 on socket 0 00:05:08.944 EAL: Detected lcore 8 as core 0 on socket 0 00:05:08.944 EAL: Detected lcore 9 as core 0 on socket 0 00:05:08.944 EAL: Maximum logical cores by configuration: 128 00:05:08.944 EAL: Detected CPU lcores: 10 00:05:08.944 EAL: Detected NUMA nodes: 1 00:05:08.944 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:08.944 EAL: Detected shared linkage of DPDK 00:05:08.944 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:08.944 EAL: Selected IOVA mode 'PA' 00:05:08.944 EAL: Probing VFIO support... 00:05:08.944 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:08.944 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:08.944 EAL: Ask a virtual area of 0x2e000 bytes 00:05:08.944 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:08.944 EAL: Setting up physically contiguous memory... 00:05:08.944 EAL: Setting maximum number of open files to 524288 00:05:08.944 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:08.944 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:08.944 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.944 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:08.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.944 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.944 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:08.944 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:08.944 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.944 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:08.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.944 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.944 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:08.944 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:08.944 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.944 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:08.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.944 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.944 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:08.944 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:08.944 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.944 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:08.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.944 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.944 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:08.944 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:08.944 EAL: Hugepages will be freed exactly as allocated. 00:05:08.944 EAL: No shared files mode enabled, IPC is disabled 00:05:08.944 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: TSC frequency is ~2200000 KHz 00:05:09.203 EAL: Main lcore 0 is ready (tid=7f6c9da16a00;cpuset=[0]) 00:05:09.203 EAL: Trying to obtain current memory policy. 00:05:09.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.203 EAL: Restoring previous memory policy: 0 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was expanded by 2MB 00:05:09.203 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:09.203 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:09.203 EAL: Mem event callback 'spdk:(nil)' registered 00:05:09.203 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:09.203 00:05:09.203 00:05:09.203 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.203 http://cunit.sourceforge.net/ 00:05:09.203 00:05:09.203 00:05:09.203 Suite: components_suite 00:05:09.203 Test: vtophys_malloc_test ...passed 00:05:09.203 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:09.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.203 EAL: Restoring previous memory policy: 4 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was expanded by 4MB 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was shrunk by 4MB 00:05:09.203 EAL: Trying to obtain current memory policy. 00:05:09.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.203 EAL: Restoring previous memory policy: 4 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was expanded by 6MB 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was shrunk by 6MB 00:05:09.203 EAL: Trying to obtain current memory policy. 00:05:09.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.203 EAL: Restoring previous memory policy: 4 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was expanded by 10MB 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was shrunk by 10MB 00:05:09.203 EAL: Trying to obtain current memory policy. 00:05:09.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.203 EAL: Restoring previous memory policy: 4 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was expanded by 18MB 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was shrunk by 18MB 00:05:09.203 EAL: Trying to obtain current memory policy. 00:05:09.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.203 EAL: Restoring previous memory policy: 4 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was expanded by 34MB 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was shrunk by 34MB 00:05:09.203 EAL: Trying to obtain current memory policy. 
00:05:09.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.203 EAL: Restoring previous memory policy: 4 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was expanded by 66MB 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was shrunk by 66MB 00:05:09.203 EAL: Trying to obtain current memory policy. 00:05:09.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.203 EAL: Restoring previous memory policy: 4 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was expanded by 130MB 00:05:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.203 EAL: request: mp_malloc_sync 00:05:09.203 EAL: No shared files mode enabled, IPC is disabled 00:05:09.203 EAL: Heap on socket 0 was shrunk by 130MB 00:05:09.203 EAL: Trying to obtain current memory policy. 00:05:09.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.462 EAL: Restoring previous memory policy: 4 00:05:09.462 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.462 EAL: request: mp_malloc_sync 00:05:09.462 EAL: No shared files mode enabled, IPC is disabled 00:05:09.462 EAL: Heap on socket 0 was expanded by 258MB 00:05:09.462 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.462 EAL: request: mp_malloc_sync 00:05:09.462 EAL: No shared files mode enabled, IPC is disabled 00:05:09.462 EAL: Heap on socket 0 was shrunk by 258MB 00:05:09.462 EAL: Trying to obtain current memory policy. 00:05:09.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.720 EAL: Restoring previous memory policy: 4 00:05:09.720 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.720 EAL: request: mp_malloc_sync 00:05:09.720 EAL: No shared files mode enabled, IPC is disabled 00:05:09.720 EAL: Heap on socket 0 was expanded by 514MB 00:05:09.720 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.720 EAL: request: mp_malloc_sync 00:05:09.720 EAL: No shared files mode enabled, IPC is disabled 00:05:09.720 EAL: Heap on socket 0 was shrunk by 514MB 00:05:09.720 EAL: Trying to obtain current memory policy. 
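Across the vtophys_spdk_malloc_test iterations above (and the final 1026MB step just below), every heap expansion is 2^k + 2 MB for k = 1..10, consistent with an allocation that roughly doubles each round while the EAL grows the heap in matching chunks. A quick check of the series:

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB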
00:05:09.720 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.026 EAL: Restoring previous memory policy: 4 00:05:10.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.026 EAL: request: mp_malloc_sync 00:05:10.026 EAL: No shared files mode enabled, IPC is disabled 00:05:10.026 EAL: Heap on socket 0 was expanded by 1026MB 00:05:10.285 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.543 passed 00:05:10.543 00:05:10.543 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.543 suites 1 1 n/a 0 0 00:05:10.543 tests 2 2 2 0 0 00:05:10.543 asserts 5470 5470 5470 0 n/a 00:05:10.543 00:05:10.543 Elapsed time = 1.254 seconds 00:05:10.543 EAL: request: mp_malloc_sync 00:05:10.543 EAL: No shared files mode enabled, IPC is disabled 00:05:10.543 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:10.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.543 EAL: request: mp_malloc_sync 00:05:10.543 EAL: No shared files mode enabled, IPC is disabled 00:05:10.543 EAL: Heap on socket 0 was shrunk by 2MB 00:05:10.543 EAL: No shared files mode enabled, IPC is disabled 00:05:10.543 EAL: No shared files mode enabled, IPC is disabled 00:05:10.543 EAL: No shared files mode enabled, IPC is disabled 00:05:10.543 00:05:10.543 real 0m1.463s 00:05:10.543 user 0m0.809s 00:05:10.543 sys 0m0.520s 00:05:10.543 11:27:11 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.543 11:27:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:10.543 ************************************ 00:05:10.543 END TEST env_vtophys 00:05:10.543 ************************************ 00:05:10.543 11:27:11 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:10.543 11:27:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.543 11:27:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.543 11:27:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.543 ************************************ 00:05:10.543 START TEST env_pci 00:05:10.543 ************************************ 00:05:10.543 11:27:11 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:10.543 00:05:10.543 00:05:10.543 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.543 http://cunit.sourceforge.net/ 00:05:10.543 00:05:10.543 00:05:10.543 Suite: pci 00:05:10.543 Test: pci_hook ...[2024-12-16 11:27:11.145053] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60220 has claimed it 00:05:10.543 passed 00:05:10.543 00:05:10.543 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.543 suites 1 1 n/a 0 0 00:05:10.543 tests 1 1 1 0 0 00:05:10.543 asserts 25 25 25 0 n/a 00:05:10.543 00:05:10.543 Elapsed time = 0.002 seconds 00:05:10.543 EAL: Cannot find device (10000:00:01.0) 00:05:10.543 EAL: Failed to attach device on primary process 00:05:10.543 00:05:10.543 real 0m0.020s 00:05:10.543 user 0m0.015s 00:05:10.543 sys 0m0.005s 00:05:10.543 11:27:11 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.543 11:27:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:10.543 ************************************ 00:05:10.543 END TEST env_pci 00:05:10.543 ************************************ 00:05:10.543 11:27:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:10.543 11:27:11 env -- env/env.sh@15 -- # uname 00:05:10.543 11:27:11 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:10.543 11:27:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:10.543 11:27:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.543 11:27:11 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:10.543 11:27:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.543 11:27:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.543 ************************************ 00:05:10.543 START TEST env_dpdk_post_init 00:05:10.543 ************************************ 00:05:10.543 11:27:11 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.543 EAL: Detected CPU lcores: 10 00:05:10.543 EAL: Detected NUMA nodes: 1 00:05:10.543 EAL: Detected shared linkage of DPDK 00:05:10.543 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.543 EAL: Selected IOVA mode 'PA' 00:05:10.801 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.801 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:10.801 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:10.801 Starting DPDK initialization... 00:05:10.801 Starting SPDK post initialization... 00:05:10.801 SPDK NVMe probe 00:05:10.801 Attaching to 0000:00:10.0 00:05:10.801 Attaching to 0000:00:11.0 00:05:10.801 Attached to 0000:00:10.0 00:05:10.801 Attached to 0000:00:11.0 00:05:10.801 Cleaning up... 00:05:10.801 00:05:10.801 real 0m0.189s 00:05:10.801 user 0m0.054s 00:05:10.801 sys 0m0.034s 00:05:10.801 11:27:11 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.801 11:27:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.801 ************************************ 00:05:10.801 END TEST env_dpdk_post_init 00:05:10.801 ************************************ 00:05:10.801 11:27:11 env -- env/env.sh@26 -- # uname 00:05:10.801 11:27:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:10.801 11:27:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.801 11:27:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.801 11:27:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.801 11:27:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.801 ************************************ 00:05:10.801 START TEST env_mem_callbacks 00:05:10.801 ************************************ 00:05:10.801 11:27:11 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.801 EAL: Detected CPU lcores: 10 00:05:10.801 EAL: Detected NUMA nodes: 1 00:05:10.801 EAL: Detected shared linkage of DPDK 00:05:10.801 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.801 EAL: Selected IOVA mode 'PA' 00:05:11.059 00:05:11.059 00:05:11.059 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.059 http://cunit.sourceforge.net/ 00:05:11.059 00:05:11.059 00:05:11.059 Suite: memory 00:05:11.059 Test: test ... 
00:05:11.059 register 0x200000200000 2097152 00:05:11.059 malloc 3145728 00:05:11.059 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.059 register 0x200000400000 4194304 00:05:11.059 buf 0x200000500000 len 3145728 PASSED 00:05:11.059 malloc 64 00:05:11.059 buf 0x2000004fff40 len 64 PASSED 00:05:11.059 malloc 4194304 00:05:11.059 register 0x200000800000 6291456 00:05:11.059 buf 0x200000a00000 len 4194304 PASSED 00:05:11.059 free 0x200000500000 3145728 00:05:11.059 free 0x2000004fff40 64 00:05:11.059 unregister 0x200000400000 4194304 PASSED 00:05:11.059 free 0x200000a00000 4194304 00:05:11.059 unregister 0x200000800000 6291456 PASSED 00:05:11.059 malloc 8388608 00:05:11.059 register 0x200000400000 10485760 00:05:11.059 buf 0x200000600000 len 8388608 PASSED 00:05:11.059 free 0x200000600000 8388608 00:05:11.059 unregister 0x200000400000 10485760 PASSED 00:05:11.059 passed 00:05:11.059 00:05:11.059 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.059 suites 1 1 n/a 0 0 00:05:11.059 tests 1 1 1 0 0 00:05:11.059 asserts 15 15 15 0 n/a 00:05:11.059 00:05:11.059 Elapsed time = 0.008 seconds 00:05:11.059 00:05:11.059 real 0m0.138s 00:05:11.059 user 0m0.018s 00:05:11.059 sys 0m0.020s 00:05:11.059 11:27:11 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.059 11:27:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:11.059 ************************************ 00:05:11.059 END TEST env_mem_callbacks 00:05:11.059 ************************************ 00:05:11.059 00:05:11.059 real 0m2.459s 00:05:11.059 user 0m1.262s 00:05:11.059 sys 0m0.849s 00:05:11.059 11:27:11 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.059 11:27:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.059 ************************************ 00:05:11.059 END TEST env 00:05:11.059 ************************************ 00:05:11.059 11:27:11 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:11.059 11:27:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.059 11:27:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.059 11:27:11 -- common/autotest_common.sh@10 -- # set +x 00:05:11.059 ************************************ 00:05:11.059 START TEST rpc 00:05:11.059 ************************************ 00:05:11.059 11:27:11 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:11.059 * Looking for test storage... 
00:05:11.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:11.059 11:27:11 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:11.059 11:27:11 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:11.059 11:27:11 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:11.318 11:27:11 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:11.318 11:27:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.318 11:27:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.318 11:27:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.318 11:27:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.318 11:27:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.318 11:27:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.318 11:27:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.318 11:27:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.318 11:27:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.318 11:27:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.318 11:27:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.318 11:27:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:11.318 11:27:11 rpc -- scripts/common.sh@345 -- # : 1 00:05:11.318 11:27:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.318 11:27:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.318 11:27:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:11.318 11:27:11 rpc -- scripts/common.sh@353 -- # local d=1 00:05:11.318 11:27:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.318 11:27:11 rpc -- scripts/common.sh@355 -- # echo 1 00:05:11.318 11:27:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.318 11:27:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:11.318 11:27:11 rpc -- scripts/common.sh@353 -- # local d=2 00:05:11.318 11:27:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.318 11:27:11 rpc -- scripts/common.sh@355 -- # echo 2 00:05:11.318 11:27:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.318 11:27:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.318 11:27:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.318 11:27:11 rpc -- scripts/common.sh@368 -- # return 0 00:05:11.318 11:27:11 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.318 11:27:11 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:11.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.318 --rc genhtml_branch_coverage=1 00:05:11.318 --rc genhtml_function_coverage=1 00:05:11.318 --rc genhtml_legend=1 00:05:11.318 --rc geninfo_all_blocks=1 00:05:11.318 --rc geninfo_unexecuted_blocks=1 00:05:11.318 00:05:11.318 ' 00:05:11.318 11:27:11 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:11.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.318 --rc genhtml_branch_coverage=1 00:05:11.318 --rc genhtml_function_coverage=1 00:05:11.318 --rc genhtml_legend=1 00:05:11.318 --rc geninfo_all_blocks=1 00:05:11.318 --rc geninfo_unexecuted_blocks=1 00:05:11.318 00:05:11.318 ' 00:05:11.318 11:27:11 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:11.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.318 --rc genhtml_branch_coverage=1 00:05:11.318 --rc genhtml_function_coverage=1 00:05:11.318 --rc 
genhtml_legend=1 00:05:11.318 --rc geninfo_all_blocks=1 00:05:11.318 --rc geninfo_unexecuted_blocks=1 00:05:11.318 00:05:11.318 ' 00:05:11.318 11:27:11 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:11.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.318 --rc genhtml_branch_coverage=1 00:05:11.318 --rc genhtml_function_coverage=1 00:05:11.318 --rc genhtml_legend=1 00:05:11.318 --rc geninfo_all_blocks=1 00:05:11.318 --rc geninfo_unexecuted_blocks=1 00:05:11.318 00:05:11.318 ' 00:05:11.318 11:27:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60343 00:05:11.318 11:27:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.318 11:27:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60343 00:05:11.318 11:27:11 rpc -- common/autotest_common.sh@835 -- # '[' -z 60343 ']' 00:05:11.318 11:27:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:11.318 11:27:11 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.318 11:27:11 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.318 11:27:11 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.318 11:27:11 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.318 11:27:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.318 [2024-12-16 11:27:11.936111] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:05:11.318 [2024-12-16 11:27:11.936250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60343 ] 00:05:11.577 [2024-12-16 11:27:12.089362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.577 [2024-12-16 11:27:12.161322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:11.577 [2024-12-16 11:27:12.161384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60343' to capture a snapshot of events at runtime. 00:05:11.578 [2024-12-16 11:27:12.161399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:11.578 [2024-12-16 11:27:12.161410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:11.578 [2024-12-16 11:27:12.161420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60343 for offline analysis/debug. 
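[editor's aside — not part of the captured log] The rpc.sh run traced above launches spdk_tgt and then drives it over the default UNIX-domain JSON-RPC socket. A minimal manual sketch of the same flow, assuming a stock SPDK checkout where scripts/rpc.py is the JSON-RPC client; the binary path, socket path, and every RPC method name below are taken verbatim from this log, nothing else is implied:

    # Start the target with the bdev tracepoint group enabled (log: rpc.sh@64).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &

    # Replay the RPCs the rpc_integrity test traces below, against the
    # default socket the target announces (/var/tmp/spdk.sock).
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc bdev_malloc_create 8 512                     # creates Malloc0
    rpc bdev_passthru_create -b Malloc0 -p Passthru0
    rpc bdev_get_bdevs | jq length                   # expect 2 (Malloc0 + Passthru0)
    rpc bdev_passthru_delete Passthru0
    rpc bdev_malloc_delete Malloc0

[end of aside; log resumes]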
00:05:11.578 [2024-12-16 11:27:12.161913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.512 11:27:12 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.512 11:27:12 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.512 11:27:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:12.512 11:27:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:12.512 11:27:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:12.512 11:27:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:12.512 11:27:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.512 11:27:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.512 11:27:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.512 ************************************ 00:05:12.512 START TEST rpc_integrity 00:05:12.512 ************************************ 00:05:12.512 11:27:12 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:12.512 11:27:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.512 11:27:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.512 11:27:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.512 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.512 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.512 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.512 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.512 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.512 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.512 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.512 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.512 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:12.512 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.512 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.512 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.512 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.512 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.512 { 00:05:12.512 "aliases": [ 00:05:12.512 "b5b3c6cc-0dc8-4229-9798-122525a94182" 00:05:12.512 ], 00:05:12.512 "assigned_rate_limits": { 00:05:12.512 "r_mbytes_per_sec": 0, 00:05:12.512 "rw_ios_per_sec": 0, 00:05:12.512 "rw_mbytes_per_sec": 0, 00:05:12.512 "w_mbytes_per_sec": 0 00:05:12.512 }, 00:05:12.512 "block_size": 512, 00:05:12.512 "claimed": false, 00:05:12.512 "driver_specific": {}, 00:05:12.512 "memory_domains": [ 00:05:12.512 { 00:05:12.512 "dma_device_id": "system", 00:05:12.512 "dma_device_type": 1 00:05:12.512 }, 00:05:12.512 { 00:05:12.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.512 "dma_device_type": 2 00:05:12.512 } 00:05:12.512 ], 00:05:12.512 "name": "Malloc0", 
00:05:12.512 "num_blocks": 16384, 00:05:12.512 "product_name": "Malloc disk", 00:05:12.512 "supported_io_types": { 00:05:12.512 "abort": true, 00:05:12.512 "compare": false, 00:05:12.512 "compare_and_write": false, 00:05:12.512 "copy": true, 00:05:12.512 "flush": true, 00:05:12.512 "get_zone_info": false, 00:05:12.512 "nvme_admin": false, 00:05:12.512 "nvme_io": false, 00:05:12.512 "nvme_io_md": false, 00:05:12.512 "nvme_iov_md": false, 00:05:12.512 "read": true, 00:05:12.512 "reset": true, 00:05:12.512 "seek_data": false, 00:05:12.512 "seek_hole": false, 00:05:12.512 "unmap": true, 00:05:12.512 "write": true, 00:05:12.512 "write_zeroes": true, 00:05:12.512 "zcopy": true, 00:05:12.512 "zone_append": false, 00:05:12.512 "zone_management": false 00:05:12.512 }, 00:05:12.512 "uuid": "b5b3c6cc-0dc8-4229-9798-122525a94182", 00:05:12.512 "zoned": false 00:05:12.512 } 00:05:12.512 ]' 00:05:12.512 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.512 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.513 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.513 [2024-12-16 11:27:13.137357] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:12.513 [2024-12-16 11:27:13.137411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.513 [2024-12-16 11:27:13.137434] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x821440 00:05:12.513 [2024-12-16 11:27:13.137445] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.513 [2024-12-16 11:27:13.139234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.513 [2024-12-16 11:27:13.139272] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.513 Passthru0 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.513 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.513 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.513 { 00:05:12.513 "aliases": [ 00:05:12.513 "b5b3c6cc-0dc8-4229-9798-122525a94182" 00:05:12.513 ], 00:05:12.513 "assigned_rate_limits": { 00:05:12.513 "r_mbytes_per_sec": 0, 00:05:12.513 "rw_ios_per_sec": 0, 00:05:12.513 "rw_mbytes_per_sec": 0, 00:05:12.513 "w_mbytes_per_sec": 0 00:05:12.513 }, 00:05:12.513 "block_size": 512, 00:05:12.513 "claim_type": "exclusive_write", 00:05:12.513 "claimed": true, 00:05:12.513 "driver_specific": {}, 00:05:12.513 "memory_domains": [ 00:05:12.513 { 00:05:12.513 "dma_device_id": "system", 00:05:12.513 "dma_device_type": 1 00:05:12.513 }, 00:05:12.513 { 00:05:12.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.513 "dma_device_type": 2 00:05:12.513 } 00:05:12.513 ], 00:05:12.513 "name": "Malloc0", 00:05:12.513 "num_blocks": 16384, 00:05:12.513 "product_name": "Malloc disk", 00:05:12.513 "supported_io_types": { 00:05:12.513 "abort": true, 00:05:12.513 "compare": false, 00:05:12.513 
"compare_and_write": false, 00:05:12.513 "copy": true, 00:05:12.513 "flush": true, 00:05:12.513 "get_zone_info": false, 00:05:12.513 "nvme_admin": false, 00:05:12.513 "nvme_io": false, 00:05:12.513 "nvme_io_md": false, 00:05:12.513 "nvme_iov_md": false, 00:05:12.513 "read": true, 00:05:12.513 "reset": true, 00:05:12.513 "seek_data": false, 00:05:12.513 "seek_hole": false, 00:05:12.513 "unmap": true, 00:05:12.513 "write": true, 00:05:12.513 "write_zeroes": true, 00:05:12.513 "zcopy": true, 00:05:12.513 "zone_append": false, 00:05:12.513 "zone_management": false 00:05:12.513 }, 00:05:12.513 "uuid": "b5b3c6cc-0dc8-4229-9798-122525a94182", 00:05:12.513 "zoned": false 00:05:12.513 }, 00:05:12.513 { 00:05:12.513 "aliases": [ 00:05:12.513 "ead2d3c0-bed9-560f-8e71-e3b7c5b8fb42" 00:05:12.513 ], 00:05:12.513 "assigned_rate_limits": { 00:05:12.513 "r_mbytes_per_sec": 0, 00:05:12.513 "rw_ios_per_sec": 0, 00:05:12.513 "rw_mbytes_per_sec": 0, 00:05:12.513 "w_mbytes_per_sec": 0 00:05:12.513 }, 00:05:12.513 "block_size": 512, 00:05:12.513 "claimed": false, 00:05:12.513 "driver_specific": { 00:05:12.513 "passthru": { 00:05:12.513 "base_bdev_name": "Malloc0", 00:05:12.513 "name": "Passthru0" 00:05:12.513 } 00:05:12.513 }, 00:05:12.513 "memory_domains": [ 00:05:12.513 { 00:05:12.513 "dma_device_id": "system", 00:05:12.513 "dma_device_type": 1 00:05:12.513 }, 00:05:12.513 { 00:05:12.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.513 "dma_device_type": 2 00:05:12.513 } 00:05:12.513 ], 00:05:12.513 "name": "Passthru0", 00:05:12.513 "num_blocks": 16384, 00:05:12.513 "product_name": "passthru", 00:05:12.513 "supported_io_types": { 00:05:12.513 "abort": true, 00:05:12.513 "compare": false, 00:05:12.513 "compare_and_write": false, 00:05:12.513 "copy": true, 00:05:12.513 "flush": true, 00:05:12.513 "get_zone_info": false, 00:05:12.513 "nvme_admin": false, 00:05:12.513 "nvme_io": false, 00:05:12.513 "nvme_io_md": false, 00:05:12.513 "nvme_iov_md": false, 00:05:12.513 "read": true, 00:05:12.513 "reset": true, 00:05:12.513 "seek_data": false, 00:05:12.513 "seek_hole": false, 00:05:12.513 "unmap": true, 00:05:12.513 "write": true, 00:05:12.513 "write_zeroes": true, 00:05:12.513 "zcopy": true, 00:05:12.513 "zone_append": false, 00:05:12.513 "zone_management": false 00:05:12.513 }, 00:05:12.513 "uuid": "ead2d3c0-bed9-560f-8e71-e3b7c5b8fb42", 00:05:12.513 "zoned": false 00:05:12.513 } 00:05:12.513 ]' 00:05:12.513 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.513 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.513 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.513 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.513 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:05:12.513 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.513 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.513 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.771 11:27:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.771 00:05:12.771 real 0m0.311s 00:05:12.771 user 0m0.199s 00:05:12.771 sys 0m0.037s 00:05:12.771 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.771 ************************************ 00:05:12.771 END TEST rpc_integrity 00:05:12.771 11:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.771 ************************************ 00:05:12.771 11:27:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:12.771 11:27:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.771 11:27:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.771 11:27:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.771 ************************************ 00:05:12.771 START TEST rpc_plugins 00:05:12.772 ************************************ 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:12.772 11:27:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.772 11:27:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:12.772 11:27:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.772 11:27:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:12.772 { 00:05:12.772 "aliases": [ 00:05:12.772 "f4afd1c0-f1aa-4ffe-a398-344886c3c652" 00:05:12.772 ], 00:05:12.772 "assigned_rate_limits": { 00:05:12.772 "r_mbytes_per_sec": 0, 00:05:12.772 "rw_ios_per_sec": 0, 00:05:12.772 "rw_mbytes_per_sec": 0, 00:05:12.772 "w_mbytes_per_sec": 0 00:05:12.772 }, 00:05:12.772 "block_size": 4096, 00:05:12.772 "claimed": false, 00:05:12.772 "driver_specific": {}, 00:05:12.772 "memory_domains": [ 00:05:12.772 { 00:05:12.772 "dma_device_id": "system", 00:05:12.772 "dma_device_type": 1 00:05:12.772 }, 00:05:12.772 { 00:05:12.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.772 "dma_device_type": 2 00:05:12.772 } 00:05:12.772 ], 00:05:12.772 "name": "Malloc1", 00:05:12.772 "num_blocks": 256, 00:05:12.772 "product_name": "Malloc disk", 00:05:12.772 "supported_io_types": { 00:05:12.772 "abort": true, 00:05:12.772 "compare": false, 00:05:12.772 "compare_and_write": false, 00:05:12.772 "copy": true, 00:05:12.772 "flush": true, 00:05:12.772 "get_zone_info": false, 00:05:12.772 "nvme_admin": false, 00:05:12.772 "nvme_io": false, 00:05:12.772 "nvme_io_md": false, 00:05:12.772 "nvme_iov_md": false, 00:05:12.772 "read": true, 00:05:12.772 "reset": true, 00:05:12.772 "seek_data": false, 00:05:12.772 "seek_hole": false, 00:05:12.772 "unmap": true, 00:05:12.772 "write": true, 00:05:12.772 "write_zeroes": true, 00:05:12.772 "zcopy": true, 00:05:12.772 "zone_append": false, 
00:05:12.772 "zone_management": false 00:05:12.772 }, 00:05:12.772 "uuid": "f4afd1c0-f1aa-4ffe-a398-344886c3c652", 00:05:12.772 "zoned": false 00:05:12.772 } 00:05:12.772 ]' 00:05:12.772 11:27:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:12.772 11:27:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:12.772 11:27:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.772 11:27:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.772 11:27:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:12.772 11:27:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:12.772 11:27:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:12.772 00:05:12.772 real 0m0.159s 00:05:12.772 user 0m0.104s 00:05:12.772 sys 0m0.018s 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.772 ************************************ 00:05:12.772 END TEST rpc_plugins 00:05:12.772 11:27:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.772 ************************************ 00:05:13.031 11:27:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.031 11:27:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.031 11:27:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.031 11:27:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.031 ************************************ 00:05:13.031 START TEST rpc_trace_cmd_test 00:05:13.031 ************************************ 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:13.031 "bdev": { 00:05:13.031 "mask": "0x8", 00:05:13.031 "tpoint_mask": "0xffffffffffffffff" 00:05:13.031 }, 00:05:13.031 "bdev_nvme": { 00:05:13.031 "mask": "0x4000", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "bdev_raid": { 00:05:13.031 "mask": "0x20000", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "blob": { 00:05:13.031 "mask": "0x10000", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "blobfs": { 00:05:13.031 "mask": "0x80", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "dsa": { 00:05:13.031 "mask": "0x200", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "ftl": { 00:05:13.031 "mask": "0x40", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "iaa": { 00:05:13.031 "mask": "0x1000", 
00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "iscsi_conn": { 00:05:13.031 "mask": "0x2", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "nvme_pcie": { 00:05:13.031 "mask": "0x800", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "nvme_tcp": { 00:05:13.031 "mask": "0x2000", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "nvmf_rdma": { 00:05:13.031 "mask": "0x10", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "nvmf_tcp": { 00:05:13.031 "mask": "0x20", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "scheduler": { 00:05:13.031 "mask": "0x40000", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "scsi": { 00:05:13.031 "mask": "0x4", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "sock": { 00:05:13.031 "mask": "0x8000", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "thread": { 00:05:13.031 "mask": "0x400", 00:05:13.031 "tpoint_mask": "0x0" 00:05:13.031 }, 00:05:13.031 "tpoint_group_mask": "0x8", 00:05:13.031 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60343" 00:05:13.031 }' 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:13.031 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:13.290 11:27:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:13.290 00:05:13.290 real 0m0.274s 00:05:13.290 user 0m0.234s 00:05:13.290 sys 0m0.032s 00:05:13.290 11:27:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.290 ************************************ 00:05:13.290 END TEST rpc_trace_cmd_test 00:05:13.290 ************************************ 00:05:13.290 11:27:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.290 11:27:13 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:13.290 11:27:13 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:13.290 11:27:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.290 11:27:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.290 11:27:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.290 ************************************ 00:05:13.290 START TEST go_rpc 00:05:13.290 ************************************ 00:05:13.290 11:27:13 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:05:13.290 11:27:13 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:13.290 11:27:13 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:13.290 11:27:13 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:13.290 11:27:13 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:13.290 11:27:13 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.290 11:27:13 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.290 11:27:13 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:13.290 11:27:13 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.290 11:27:13 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:13.290 11:27:13 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:13.290 11:27:13 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["7ce820e0-8b75-4fa6-b847-553662a97249"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"7ce820e0-8b75-4fa6-b847-553662a97249","zoned":false}]' 00:05:13.290 11:27:13 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:13.290 11:27:14 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:13.290 11:27:14 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:13.290 11:27:14 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.290 11:27:14 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.290 11:27:14 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.290 11:27:14 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:13.290 11:27:14 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:13.290 11:27:14 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:13.549 11:27:14 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:13.549 00:05:13.549 real 0m0.211s 00:05:13.549 user 0m0.142s 00:05:13.549 sys 0m0.036s 00:05:13.549 11:27:14 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.549 11:27:14 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.549 ************************************ 00:05:13.549 END TEST go_rpc 00:05:13.549 ************************************ 00:05:13.549 11:27:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:13.549 11:27:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:13.549 11:27:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.549 11:27:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.549 11:27:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.549 ************************************ 00:05:13.549 START TEST rpc_daemon_integrity 00:05:13.549 ************************************ 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.549 
11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.549 { 00:05:13.549 "aliases": [ 00:05:13.549 "f351ce44-e0ae-4eb2-b118-e47bfd809361" 00:05:13.549 ], 00:05:13.549 "assigned_rate_limits": { 00:05:13.549 "r_mbytes_per_sec": 0, 00:05:13.549 "rw_ios_per_sec": 0, 00:05:13.549 "rw_mbytes_per_sec": 0, 00:05:13.549 "w_mbytes_per_sec": 0 00:05:13.549 }, 00:05:13.549 "block_size": 512, 00:05:13.549 "claimed": false, 00:05:13.549 "driver_specific": {}, 00:05:13.549 "memory_domains": [ 00:05:13.549 { 00:05:13.549 "dma_device_id": "system", 00:05:13.549 "dma_device_type": 1 00:05:13.549 }, 00:05:13.549 { 00:05:13.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.549 "dma_device_type": 2 00:05:13.549 } 00:05:13.549 ], 00:05:13.549 "name": "Malloc3", 00:05:13.549 "num_blocks": 16384, 00:05:13.549 "product_name": "Malloc disk", 00:05:13.549 "supported_io_types": { 00:05:13.549 "abort": true, 00:05:13.549 "compare": false, 00:05:13.549 "compare_and_write": false, 00:05:13.549 "copy": true, 00:05:13.549 "flush": true, 00:05:13.549 "get_zone_info": false, 00:05:13.549 "nvme_admin": false, 00:05:13.549 "nvme_io": false, 00:05:13.549 "nvme_io_md": false, 00:05:13.549 "nvme_iov_md": false, 00:05:13.549 "read": true, 00:05:13.549 "reset": true, 00:05:13.549 "seek_data": false, 00:05:13.549 "seek_hole": false, 00:05:13.549 "unmap": true, 00:05:13.549 "write": true, 00:05:13.549 "write_zeroes": true, 00:05:13.549 "zcopy": true, 00:05:13.549 "zone_append": false, 00:05:13.549 "zone_management": false 00:05:13.549 }, 00:05:13.549 "uuid": "f351ce44-e0ae-4eb2-b118-e47bfd809361", 00:05:13.549 "zoned": false 00:05:13.549 } 00:05:13.549 ]' 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.549 [2024-12-16 11:27:14.290424] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:13.549 [2024-12-16 11:27:14.290478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.549 [2024-12-16 11:27:14.290503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x80ebe0 00:05:13.549 [2024-12-16 11:27:14.290513] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:05:13.549 [2024-12-16 11:27:14.292329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.549 [2024-12-16 11:27:14.292365] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.549 Passthru0 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.549 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.808 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.808 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.808 { 00:05:13.808 "aliases": [ 00:05:13.808 "f351ce44-e0ae-4eb2-b118-e47bfd809361" 00:05:13.808 ], 00:05:13.808 "assigned_rate_limits": { 00:05:13.808 "r_mbytes_per_sec": 0, 00:05:13.808 "rw_ios_per_sec": 0, 00:05:13.808 "rw_mbytes_per_sec": 0, 00:05:13.808 "w_mbytes_per_sec": 0 00:05:13.808 }, 00:05:13.808 "block_size": 512, 00:05:13.808 "claim_type": "exclusive_write", 00:05:13.808 "claimed": true, 00:05:13.808 "driver_specific": {}, 00:05:13.808 "memory_domains": [ 00:05:13.808 { 00:05:13.808 "dma_device_id": "system", 00:05:13.808 "dma_device_type": 1 00:05:13.808 }, 00:05:13.808 { 00:05:13.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.808 "dma_device_type": 2 00:05:13.808 } 00:05:13.808 ], 00:05:13.808 "name": "Malloc3", 00:05:13.808 "num_blocks": 16384, 00:05:13.808 "product_name": "Malloc disk", 00:05:13.808 "supported_io_types": { 00:05:13.808 "abort": true, 00:05:13.808 "compare": false, 00:05:13.808 "compare_and_write": false, 00:05:13.809 "copy": true, 00:05:13.809 "flush": true, 00:05:13.809 "get_zone_info": false, 00:05:13.809 "nvme_admin": false, 00:05:13.809 "nvme_io": false, 00:05:13.809 "nvme_io_md": false, 00:05:13.809 "nvme_iov_md": false, 00:05:13.809 "read": true, 00:05:13.809 "reset": true, 00:05:13.809 "seek_data": false, 00:05:13.809 "seek_hole": false, 00:05:13.809 "unmap": true, 00:05:13.809 "write": true, 00:05:13.809 "write_zeroes": true, 00:05:13.809 "zcopy": true, 00:05:13.809 "zone_append": false, 00:05:13.809 "zone_management": false 00:05:13.809 }, 00:05:13.809 "uuid": "f351ce44-e0ae-4eb2-b118-e47bfd809361", 00:05:13.809 "zoned": false 00:05:13.809 }, 00:05:13.809 { 00:05:13.809 "aliases": [ 00:05:13.809 "9d6dc4ae-3adf-57cf-b653-0221b973746d" 00:05:13.809 ], 00:05:13.809 "assigned_rate_limits": { 00:05:13.809 "r_mbytes_per_sec": 0, 00:05:13.809 "rw_ios_per_sec": 0, 00:05:13.809 "rw_mbytes_per_sec": 0, 00:05:13.809 "w_mbytes_per_sec": 0 00:05:13.809 }, 00:05:13.809 "block_size": 512, 00:05:13.809 "claimed": false, 00:05:13.809 "driver_specific": { 00:05:13.809 "passthru": { 00:05:13.809 "base_bdev_name": "Malloc3", 00:05:13.809 "name": "Passthru0" 00:05:13.809 } 00:05:13.809 }, 00:05:13.809 "memory_domains": [ 00:05:13.809 { 00:05:13.809 "dma_device_id": "system", 00:05:13.809 "dma_device_type": 1 00:05:13.809 }, 00:05:13.809 { 00:05:13.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.809 "dma_device_type": 2 00:05:13.809 } 00:05:13.809 ], 00:05:13.809 "name": "Passthru0", 00:05:13.809 "num_blocks": 16384, 00:05:13.809 "product_name": "passthru", 00:05:13.809 "supported_io_types": { 00:05:13.809 "abort": true, 00:05:13.809 "compare": false, 00:05:13.809 "compare_and_write": false, 00:05:13.809 "copy": true, 
00:05:13.809 "flush": true, 00:05:13.809 "get_zone_info": false, 00:05:13.809 "nvme_admin": false, 00:05:13.809 "nvme_io": false, 00:05:13.809 "nvme_io_md": false, 00:05:13.809 "nvme_iov_md": false, 00:05:13.809 "read": true, 00:05:13.809 "reset": true, 00:05:13.809 "seek_data": false, 00:05:13.809 "seek_hole": false, 00:05:13.809 "unmap": true, 00:05:13.809 "write": true, 00:05:13.809 "write_zeroes": true, 00:05:13.809 "zcopy": true, 00:05:13.809 "zone_append": false, 00:05:13.809 "zone_management": false 00:05:13.809 }, 00:05:13.809 "uuid": "9d6dc4ae-3adf-57cf-b653-0221b973746d", 00:05:13.809 "zoned": false 00:05:13.809 } 00:05:13.809 ]' 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.809 00:05:13.809 real 0m0.319s 00:05:13.809 user 0m0.209s 00:05:13.809 sys 0m0.041s 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.809 11:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.809 ************************************ 00:05:13.809 END TEST rpc_daemon_integrity 00:05:13.809 ************************************ 00:05:13.809 11:27:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:13.809 11:27:14 rpc -- rpc/rpc.sh@84 -- # killprocess 60343 00:05:13.809 11:27:14 rpc -- common/autotest_common.sh@954 -- # '[' -z 60343 ']' 00:05:13.809 11:27:14 rpc -- common/autotest_common.sh@958 -- # kill -0 60343 00:05:13.809 11:27:14 rpc -- common/autotest_common.sh@959 -- # uname 00:05:13.809 11:27:14 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.809 11:27:14 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60343 00:05:13.809 11:27:14 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.809 killing process with pid 60343 00:05:13.809 11:27:14 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.809 11:27:14 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60343' 00:05:13.809 11:27:14 rpc -- 
common/autotest_common.sh@973 -- # kill 60343 00:05:13.809 11:27:14 rpc -- common/autotest_common.sh@978 -- # wait 60343 00:05:14.376 00:05:14.376 real 0m3.223s 00:05:14.376 user 0m4.219s 00:05:14.376 sys 0m0.808s 00:05:14.376 11:27:14 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.377 11:27:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.377 ************************************ 00:05:14.377 END TEST rpc 00:05:14.377 ************************************ 00:05:14.377 11:27:14 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:14.377 11:27:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.377 11:27:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.377 11:27:14 -- common/autotest_common.sh@10 -- # set +x 00:05:14.377 ************************************ 00:05:14.377 START TEST skip_rpc 00:05:14.377 ************************************ 00:05:14.377 11:27:14 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:14.377 * Looking for test storage... 00:05:14.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.377 11:27:15 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.377 11:27:15 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.377 11:27:15 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.377 11:27:15 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.377 11:27:15 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:14.377 11:27:15 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.377 11:27:15 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.377 --rc genhtml_branch_coverage=1 00:05:14.377 --rc genhtml_function_coverage=1 00:05:14.377 --rc genhtml_legend=1 00:05:14.377 --rc geninfo_all_blocks=1 00:05:14.377 --rc geninfo_unexecuted_blocks=1 00:05:14.377 00:05:14.377 ' 00:05:14.377 11:27:15 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.377 --rc genhtml_branch_coverage=1 00:05:14.377 --rc genhtml_function_coverage=1 00:05:14.377 --rc genhtml_legend=1 00:05:14.377 --rc geninfo_all_blocks=1 00:05:14.377 --rc geninfo_unexecuted_blocks=1 00:05:14.377 00:05:14.377 ' 00:05:14.377 11:27:15 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.377 --rc genhtml_branch_coverage=1 00:05:14.377 --rc genhtml_function_coverage=1 00:05:14.377 --rc genhtml_legend=1 00:05:14.377 --rc geninfo_all_blocks=1 00:05:14.377 --rc geninfo_unexecuted_blocks=1 00:05:14.377 00:05:14.377 ' 00:05:14.377 11:27:15 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.377 --rc genhtml_branch_coverage=1 00:05:14.377 --rc genhtml_function_coverage=1 00:05:14.377 --rc genhtml_legend=1 00:05:14.377 --rc geninfo_all_blocks=1 00:05:14.377 --rc geninfo_unexecuted_blocks=1 00:05:14.377 00:05:14.377 ' 00:05:14.377 11:27:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.377 11:27:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:14.377 11:27:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:14.377 11:27:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.377 11:27:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.377 11:27:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.377 ************************************ 00:05:14.377 START TEST skip_rpc 00:05:14.377 ************************************ 00:05:14.377 11:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:14.377 11:27:15 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=60612 00:05:14.377 11:27:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:14.377 11:27:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.377 11:27:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:14.634 [2024-12-16 11:27:15.199947] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:05:14.634 [2024-12-16 11:27:15.200081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60612 ] 00:05:14.634 [2024-12-16 11:27:15.354746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.892 [2024-12-16 11:27:15.427964] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.159 2024/12/16 11:27:20 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60612 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 60612 ']' 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 60612 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60612 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.159 killing process with pid 60612 00:05:20.159 11:27:20 skip_rpc.skip_rpc 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60612' 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 60612 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 60612 00:05:20.159 00:05:20.159 real 0m5.438s 00:05:20.159 user 0m5.041s 00:05:20.159 sys 0m0.303s 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.159 11:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.159 ************************************ 00:05:20.159 END TEST skip_rpc 00:05:20.159 ************************************ 00:05:20.159 11:27:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:20.159 11:27:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.159 11:27:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.159 11:27:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.159 ************************************ 00:05:20.159 START TEST skip_rpc_with_json 00:05:20.159 ************************************ 00:05:20.159 11:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:20.159 11:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:20.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.159 11:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60705 00:05:20.159 11:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.159 11:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.159 11:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60705 00:05:20.159 11:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 60705 ']' 00:05:20.159 11:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.159 11:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.159 11:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.159 11:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.159 11:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.159 [2024-12-16 11:27:20.687690] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
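[editor's aside — not part of the captured log] The skip_rpc_with_json steps that follow reduce to: query a transport that does not exist yet, create it, then snapshot the running configuration to the path skip_rpc.sh@11 exported earlier (CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json). A hedged sketch of that sequence, with method names lifted directly from the traced output:

    ./scripts/rpc.py nvmf_get_transports --trtype tcp   # expected to fail: transport 'tcp' does not exist (Code=-19)
    ./scripts/rpc.py nvmf_create_transport -t tcp       # log: *** TCP Transport Init ***
    ./scripts/rpc.py save_config > test/rpc/config.json # produces the JSON dump that follows

[end of aside; log resumes]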
00:05:20.160 [2024-12-16 11:27:20.688015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60705 ] 00:05:20.160 [2024-12-16 11:27:20.829981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.160 [2024-12-16 11:27:20.887090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.418 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.418 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:20.418 11:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:20.418 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.418 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.677 [2024-12-16 11:27:21.171314] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:20.677 2024/12/16 11:27:21 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:20.677 request: 00:05:20.677 { 00:05:20.677 "method": "nvmf_get_transports", 00:05:20.677 "params": { 00:05:20.677 "trtype": "tcp" 00:05:20.677 } 00:05:20.677 } 00:05:20.677 Got JSON-RPC error response 00:05:20.677 GoRPCClient: error on JSON-RPC call 00:05:20.677 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:20.677 11:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:20.677 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.677 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.677 [2024-12-16 11:27:21.183427] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.677 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.677 11:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:20.677 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.677 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.677 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.677 11:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:20.677 { 00:05:20.677 "subsystems": [ 00:05:20.677 { 00:05:20.677 "subsystem": "fsdev", 00:05:20.677 "config": [ 00:05:20.677 { 00:05:20.677 "method": "fsdev_set_opts", 00:05:20.677 "params": { 00:05:20.677 "fsdev_io_cache_size": 256, 00:05:20.677 "fsdev_io_pool_size": 65535 00:05:20.677 } 00:05:20.677 } 00:05:20.677 ] 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "keyring", 00:05:20.677 "config": [] 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "iobuf", 00:05:20.677 "config": [ 00:05:20.677 { 00:05:20.677 "method": "iobuf_set_options", 00:05:20.677 "params": { 00:05:20.677 "enable_numa": false, 00:05:20.677 "large_bufsize": 135168, 00:05:20.677 "large_pool_count": 1024, 00:05:20.677 "small_bufsize": 8192, 00:05:20.677 "small_pool_count": 8192 00:05:20.677 } 
00:05:20.677 } 00:05:20.677 ] 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "sock", 00:05:20.677 "config": [ 00:05:20.677 { 00:05:20.677 "method": "sock_set_default_impl", 00:05:20.677 "params": { 00:05:20.677 "impl_name": "posix" 00:05:20.677 } 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "method": "sock_impl_set_options", 00:05:20.677 "params": { 00:05:20.677 "enable_ktls": false, 00:05:20.677 "enable_placement_id": 0, 00:05:20.677 "enable_quickack": false, 00:05:20.677 "enable_recv_pipe": true, 00:05:20.677 "enable_zerocopy_send_client": false, 00:05:20.677 "enable_zerocopy_send_server": true, 00:05:20.677 "impl_name": "ssl", 00:05:20.677 "recv_buf_size": 4096, 00:05:20.677 "send_buf_size": 4096, 00:05:20.677 "tls_version": 0, 00:05:20.677 "zerocopy_threshold": 0 00:05:20.677 } 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "method": "sock_impl_set_options", 00:05:20.677 "params": { 00:05:20.677 "enable_ktls": false, 00:05:20.677 "enable_placement_id": 0, 00:05:20.677 "enable_quickack": false, 00:05:20.677 "enable_recv_pipe": true, 00:05:20.677 "enable_zerocopy_send_client": false, 00:05:20.677 "enable_zerocopy_send_server": true, 00:05:20.677 "impl_name": "posix", 00:05:20.677 "recv_buf_size": 2097152, 00:05:20.677 "send_buf_size": 2097152, 00:05:20.677 "tls_version": 0, 00:05:20.677 "zerocopy_threshold": 0 00:05:20.677 } 00:05:20.677 } 00:05:20.677 ] 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "vmd", 00:05:20.677 "config": [] 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "accel", 00:05:20.677 "config": [ 00:05:20.677 { 00:05:20.677 "method": "accel_set_options", 00:05:20.677 "params": { 00:05:20.677 "buf_count": 2048, 00:05:20.677 "large_cache_size": 16, 00:05:20.677 "sequence_count": 2048, 00:05:20.677 "small_cache_size": 128, 00:05:20.677 "task_count": 2048 00:05:20.677 } 00:05:20.677 } 00:05:20.677 ] 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "bdev", 00:05:20.677 "config": [ 00:05:20.677 { 00:05:20.677 "method": "bdev_set_options", 00:05:20.677 "params": { 00:05:20.677 "bdev_auto_examine": true, 00:05:20.677 "bdev_io_cache_size": 256, 00:05:20.677 "bdev_io_pool_size": 65535, 00:05:20.677 "iobuf_large_cache_size": 16, 00:05:20.677 "iobuf_small_cache_size": 128 00:05:20.677 } 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "method": "bdev_raid_set_options", 00:05:20.677 "params": { 00:05:20.677 "process_max_bandwidth_mb_sec": 0, 00:05:20.677 "process_window_size_kb": 1024 00:05:20.677 } 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "method": "bdev_iscsi_set_options", 00:05:20.677 "params": { 00:05:20.677 "timeout_sec": 30 00:05:20.677 } 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "method": "bdev_nvme_set_options", 00:05:20.677 "params": { 00:05:20.677 "action_on_timeout": "none", 00:05:20.677 "allow_accel_sequence": false, 00:05:20.677 "arbitration_burst": 0, 00:05:20.677 "bdev_retry_count": 3, 00:05:20.677 "ctrlr_loss_timeout_sec": 0, 00:05:20.677 "delay_cmd_submit": true, 00:05:20.677 "dhchap_dhgroups": [ 00:05:20.677 "null", 00:05:20.677 "ffdhe2048", 00:05:20.677 "ffdhe3072", 00:05:20.677 "ffdhe4096", 00:05:20.677 "ffdhe6144", 00:05:20.677 "ffdhe8192" 00:05:20.677 ], 00:05:20.677 "dhchap_digests": [ 00:05:20.677 "sha256", 00:05:20.677 "sha384", 00:05:20.677 "sha512" 00:05:20.677 ], 00:05:20.677 "disable_auto_failback": false, 00:05:20.677 "fast_io_fail_timeout_sec": 0, 00:05:20.677 "generate_uuids": false, 00:05:20.677 "high_priority_weight": 0, 00:05:20.677 "io_path_stat": false, 00:05:20.677 "io_queue_requests": 0, 00:05:20.677 
"keep_alive_timeout_ms": 10000, 00:05:20.677 "low_priority_weight": 0, 00:05:20.677 "medium_priority_weight": 0, 00:05:20.677 "nvme_adminq_poll_period_us": 10000, 00:05:20.677 "nvme_error_stat": false, 00:05:20.677 "nvme_ioq_poll_period_us": 0, 00:05:20.677 "rdma_cm_event_timeout_ms": 0, 00:05:20.677 "rdma_max_cq_size": 0, 00:05:20.677 "rdma_srq_size": 0, 00:05:20.677 "rdma_umr_per_io": false, 00:05:20.677 "reconnect_delay_sec": 0, 00:05:20.677 "timeout_admin_us": 0, 00:05:20.677 "timeout_us": 0, 00:05:20.677 "transport_ack_timeout": 0, 00:05:20.677 "transport_retry_count": 4, 00:05:20.677 "transport_tos": 0 00:05:20.677 } 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "method": "bdev_nvme_set_hotplug", 00:05:20.677 "params": { 00:05:20.677 "enable": false, 00:05:20.677 "period_us": 100000 00:05:20.677 } 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "method": "bdev_wait_for_examine" 00:05:20.677 } 00:05:20.677 ] 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "scsi", 00:05:20.677 "config": null 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "scheduler", 00:05:20.677 "config": [ 00:05:20.677 { 00:05:20.677 "method": "framework_set_scheduler", 00:05:20.677 "params": { 00:05:20.677 "name": "static" 00:05:20.677 } 00:05:20.677 } 00:05:20.677 ] 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "vhost_scsi", 00:05:20.677 "config": [] 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "vhost_blk", 00:05:20.677 "config": [] 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "ublk", 00:05:20.677 "config": [] 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "nbd", 00:05:20.677 "config": [] 00:05:20.677 }, 00:05:20.677 { 00:05:20.677 "subsystem": "nvmf", 00:05:20.677 "config": [ 00:05:20.677 { 00:05:20.677 "method": "nvmf_set_config", 00:05:20.677 "params": { 00:05:20.677 "admin_cmd_passthru": { 00:05:20.677 "identify_ctrlr": false 00:05:20.678 }, 00:05:20.678 "dhchap_dhgroups": [ 00:05:20.678 "null", 00:05:20.678 "ffdhe2048", 00:05:20.678 "ffdhe3072", 00:05:20.678 "ffdhe4096", 00:05:20.678 "ffdhe6144", 00:05:20.678 "ffdhe8192" 00:05:20.678 ], 00:05:20.678 "dhchap_digests": [ 00:05:20.678 "sha256", 00:05:20.678 "sha384", 00:05:20.678 "sha512" 00:05:20.678 ], 00:05:20.678 "discovery_filter": "match_any" 00:05:20.678 } 00:05:20.678 }, 00:05:20.678 { 00:05:20.678 "method": "nvmf_set_max_subsystems", 00:05:20.678 "params": { 00:05:20.678 "max_subsystems": 1024 00:05:20.678 } 00:05:20.678 }, 00:05:20.678 { 00:05:20.678 "method": "nvmf_set_crdt", 00:05:20.678 "params": { 00:05:20.678 "crdt1": 0, 00:05:20.678 "crdt2": 0, 00:05:20.678 "crdt3": 0 00:05:20.678 } 00:05:20.678 }, 00:05:20.678 { 00:05:20.678 "method": "nvmf_create_transport", 00:05:20.678 "params": { 00:05:20.678 "abort_timeout_sec": 1, 00:05:20.678 "ack_timeout": 0, 00:05:20.678 "buf_cache_size": 4294967295, 00:05:20.678 "c2h_success": true, 00:05:20.678 "data_wr_pool_size": 0, 00:05:20.678 "dif_insert_or_strip": false, 00:05:20.678 "in_capsule_data_size": 4096, 00:05:20.678 "io_unit_size": 131072, 00:05:20.678 "max_aq_depth": 128, 00:05:20.678 "max_io_qpairs_per_ctrlr": 127, 00:05:20.678 "max_io_size": 131072, 00:05:20.678 "max_queue_depth": 128, 00:05:20.678 "num_shared_buffers": 511, 00:05:20.678 "sock_priority": 0, 00:05:20.678 "trtype": "TCP", 00:05:20.678 "zcopy": false 00:05:20.678 } 00:05:20.678 } 00:05:20.678 ] 00:05:20.678 }, 00:05:20.678 { 00:05:20.678 "subsystem": "iscsi", 00:05:20.678 "config": [ 00:05:20.678 { 00:05:20.678 "method": "iscsi_set_options", 00:05:20.678 "params": { 00:05:20.678 
"allow_duplicated_isid": false, 00:05:20.678 "chap_group": 0, 00:05:20.678 "data_out_pool_size": 2048, 00:05:20.678 "default_time2retain": 20, 00:05:20.678 "default_time2wait": 2, 00:05:20.678 "disable_chap": false, 00:05:20.678 "error_recovery_level": 0, 00:05:20.678 "first_burst_length": 8192, 00:05:20.678 "immediate_data": true, 00:05:20.678 "immediate_data_pool_size": 16384, 00:05:20.678 "max_connections_per_session": 2, 00:05:20.678 "max_large_datain_per_connection": 64, 00:05:20.678 "max_queue_depth": 64, 00:05:20.678 "max_r2t_per_connection": 4, 00:05:20.678 "max_sessions": 128, 00:05:20.678 "mutual_chap": false, 00:05:20.678 "node_base": "iqn.2016-06.io.spdk", 00:05:20.678 "nop_in_interval": 30, 00:05:20.678 "nop_timeout": 60, 00:05:20.678 "pdu_pool_size": 36864, 00:05:20.678 "require_chap": false 00:05:20.678 } 00:05:20.678 } 00:05:20.678 ] 00:05:20.678 } 00:05:20.678 ] 00:05:20.678 } 00:05:20.678 11:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:20.678 11:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60705 00:05:20.678 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 60705 ']' 00:05:20.678 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 60705 00:05:20.678 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:20.678 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.678 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60705 00:05:20.678 killing process with pid 60705 00:05:20.678 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.678 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.678 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60705' 00:05:20.678 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 60705 00:05:20.678 11:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 60705 00:05:21.245 11:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60731 00:05:21.245 11:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.245 11:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:26.516 11:27:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 60731 00:05:26.516 11:27:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 60731 ']' 00:05:26.516 11:27:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 60731 00:05:26.516 11:27:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:26.516 11:27:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.516 11:27:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60731 00:05:26.516 killing process with pid 60731 00:05:26.516 11:27:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.516 11:27:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.516 11:27:26 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60731' 00:05:26.516 11:27:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 60731 00:05:26.516 11:27:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 60731 00:05:26.516 11:27:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.516 11:27:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.516 ************************************ 00:05:26.516 END TEST skip_rpc_with_json 00:05:26.516 ************************************ 00:05:26.516 00:05:26.516 real 0m6.611s 00:05:26.516 user 0m6.155s 00:05:26.516 sys 0m0.641s 00:05:26.516 11:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.516 11:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.775 11:27:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:26.775 11:27:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.775 11:27:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.775 11:27:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.775 ************************************ 00:05:26.775 START TEST skip_rpc_with_delay 00:05:26.775 ************************************ 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.775 [2024-12-16 11:27:27.361044] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:26.775 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.775 00:05:26.775 real 0m0.099s 00:05:26.775 user 0m0.069s 00:05:26.776 sys 0m0.029s 00:05:26.776 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.776 ************************************ 00:05:26.776 END TEST skip_rpc_with_delay 00:05:26.776 ************************************ 00:05:26.776 11:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:26.776 11:27:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:26.776 11:27:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:26.776 11:27:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:26.776 11:27:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.776 11:27:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.776 11:27:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.776 ************************************ 00:05:26.776 START TEST exit_on_failed_rpc_init 00:05:26.776 ************************************ 00:05:26.776 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:26.776 11:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60840 00:05:26.776 11:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 60840 00:05:26.776 11:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.776 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 60840 ']' 00:05:26.776 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.776 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.776 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.776 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.776 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.776 [2024-12-16 11:27:27.511757] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:05:26.776 [2024-12-16 11:27:27.511861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60840 ] 00:05:27.034 [2024-12-16 11:27:27.659737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.034 [2024-12-16 11:27:27.704879] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:27.293 11:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.551 [2024-12-16 11:27:28.059903] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:05:27.551 [2024-12-16 11:27:28.060004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60857 ] 00:05:27.551 [2024-12-16 11:27:28.224491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.809 [2024-12-16 11:27:28.304754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.809 [2024-12-16 11:27:28.305014] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:27.809 [2024-12-16 11:27:28.305038] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:27.809 [2024-12-16 11:27:28.305051] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 60840 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 60840 ']' 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 60840 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60840 00:05:27.809 killing process with pid 60840 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60840' 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 60840 00:05:27.809 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 60840 00:05:28.068 00:05:28.068 real 0m1.355s 00:05:28.068 user 0m1.472s 00:05:28.068 sys 0m0.408s 00:05:28.068 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.068 11:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.068 ************************************ 00:05:28.068 END TEST exit_on_failed_rpc_init 00:05:28.068 ************************************ 00:05:28.326 11:27:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:28.326 ************************************ 00:05:28.326 END TEST skip_rpc 00:05:28.326 ************************************ 00:05:28.326 00:05:28.326 real 0m13.885s 00:05:28.326 user 0m12.919s 00:05:28.326 sys 0m1.571s 00:05:28.326 11:27:28 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.326 11:27:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.326 11:27:28 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:28.326 11:27:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.326 11:27:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.326 11:27:28 -- common/autotest_common.sh@10 -- # set +x 00:05:28.326 
************************************ 00:05:28.326 START TEST rpc_client 00:05:28.326 ************************************ 00:05:28.326 11:27:28 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:28.326 * Looking for test storage... 00:05:28.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:28.326 11:27:28 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.326 11:27:28 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.326 11:27:28 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.585 11:27:29 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.585 11:27:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.585 11:27:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.585 11:27:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.585 11:27:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.585 11:27:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.585 11:27:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.586 11:27:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:28.586 11:27:29 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.586 11:27:29 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.586 --rc genhtml_branch_coverage=1 00:05:28.586 --rc genhtml_function_coverage=1 00:05:28.586 --rc genhtml_legend=1 00:05:28.586 --rc geninfo_all_blocks=1 00:05:28.586 --rc geninfo_unexecuted_blocks=1 00:05:28.586 00:05:28.586 ' 00:05:28.586 11:27:29 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.586 --rc genhtml_branch_coverage=1 00:05:28.586 --rc genhtml_function_coverage=1 00:05:28.586 --rc genhtml_legend=1 00:05:28.586 --rc geninfo_all_blocks=1 00:05:28.586 --rc geninfo_unexecuted_blocks=1 00:05:28.586 00:05:28.586 ' 00:05:28.586 11:27:29 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.586 --rc genhtml_branch_coverage=1 00:05:28.586 --rc genhtml_function_coverage=1 00:05:28.586 --rc genhtml_legend=1 00:05:28.586 --rc geninfo_all_blocks=1 00:05:28.586 --rc geninfo_unexecuted_blocks=1 00:05:28.586 00:05:28.586 ' 00:05:28.586 11:27:29 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.586 --rc genhtml_branch_coverage=1 00:05:28.586 --rc genhtml_function_coverage=1 00:05:28.586 --rc genhtml_legend=1 00:05:28.586 --rc geninfo_all_blocks=1 00:05:28.586 --rc geninfo_unexecuted_blocks=1 00:05:28.586 00:05:28.586 ' 00:05:28.586 11:27:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:28.586 OK 00:05:28.586 11:27:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:28.586 00:05:28.586 real 0m0.238s 00:05:28.586 user 0m0.160s 00:05:28.586 sys 0m0.084s 00:05:28.586 11:27:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.586 11:27:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:28.586 ************************************ 00:05:28.586 END TEST rpc_client 00:05:28.586 ************************************ 00:05:28.586 11:27:29 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:28.586 11:27:29 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.586 11:27:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.586 11:27:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.586 ************************************ 00:05:28.586 START TEST json_config 00:05:28.586 ************************************ 00:05:28.586 11:27:29 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:28.586 11:27:29 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.586 11:27:29 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.586 11:27:29 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.586 11:27:29 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.586 11:27:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.586 11:27:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.586 11:27:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.586 11:27:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.586 11:27:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.586 11:27:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.586 11:27:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.586 11:27:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.586 11:27:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.586 11:27:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.586 11:27:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.586 11:27:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:28.586 11:27:29 json_config -- scripts/common.sh@345 -- # : 1 00:05:28.586 11:27:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.586 11:27:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.586 11:27:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:28.586 11:27:29 json_config -- scripts/common.sh@353 -- # local d=1 00:05:28.586 11:27:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.586 11:27:29 json_config -- scripts/common.sh@355 -- # echo 1 00:05:28.586 11:27:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.586 11:27:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:28.586 11:27:29 json_config -- scripts/common.sh@353 -- # local d=2 00:05:28.586 11:27:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.586 11:27:29 json_config -- scripts/common.sh@355 -- # echo 2 00:05:28.586 11:27:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.586 11:27:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.586 11:27:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.586 11:27:29 json_config -- scripts/common.sh@368 -- # return 0 00:05:28.586 11:27:29 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.586 11:27:29 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.586 --rc genhtml_branch_coverage=1 00:05:28.586 --rc genhtml_function_coverage=1 00:05:28.586 --rc genhtml_legend=1 00:05:28.586 --rc geninfo_all_blocks=1 00:05:28.586 --rc geninfo_unexecuted_blocks=1 00:05:28.586 00:05:28.586 ' 00:05:28.586 11:27:29 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.586 --rc genhtml_branch_coverage=1 00:05:28.586 --rc genhtml_function_coverage=1 00:05:28.586 --rc genhtml_legend=1 00:05:28.586 --rc geninfo_all_blocks=1 00:05:28.586 --rc geninfo_unexecuted_blocks=1 00:05:28.586 00:05:28.586 ' 00:05:28.586 11:27:29 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.586 --rc genhtml_branch_coverage=1 00:05:28.586 --rc genhtml_function_coverage=1 00:05:28.586 --rc genhtml_legend=1 00:05:28.586 --rc geninfo_all_blocks=1 00:05:28.586 --rc geninfo_unexecuted_blocks=1 00:05:28.586 00:05:28.586 ' 00:05:28.586 11:27:29 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.586 --rc genhtml_branch_coverage=1 00:05:28.586 --rc genhtml_function_coverage=1 00:05:28.586 --rc genhtml_legend=1 00:05:28.586 --rc geninfo_all_blocks=1 00:05:28.586 --rc geninfo_unexecuted_blocks=1 00:05:28.586 00:05:28.586 ' 00:05:28.586 11:27:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.586 11:27:29 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.586 11:27:29 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:28.586 11:27:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:28.846 11:27:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.846 11:27:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.846 11:27:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.846 11:27:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.846 11:27:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.846 11:27:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.846 11:27:29 json_config -- paths/export.sh@5 -- # export PATH 00:05:28.846 11:27:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.846 11:27:29 json_config -- nvmf/common.sh@51 -- # : 0 00:05:28.846 11:27:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:28.846 11:27:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:28.846 11:27:29 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.846 11:27:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.846 11:27:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.846 11:27:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:28.846 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:28.846 11:27:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:28.846 11:27:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:28.846 11:27:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:28.846 INFO: JSON configuration test init 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:28.846 11:27:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.846 11:27:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:28.846 11:27:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.846 11:27:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.846 Waiting for target to run... 00:05:28.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:28.846 11:27:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:28.846 11:27:29 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.846 11:27:29 json_config -- json_config/common.sh@10 -- # shift 00:05:28.846 11:27:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.846 11:27:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.846 11:27:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.846 11:27:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.846 11:27:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.846 11:27:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60996 00:05:28.846 11:27:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.846 11:27:29 json_config -- json_config/common.sh@25 -- # waitforlisten 60996 /var/tmp/spdk_tgt.sock 00:05:28.846 11:27:29 json_config -- common/autotest_common.sh@835 -- # '[' -z 60996 ']' 00:05:28.846 11:27:29 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.846 11:27:29 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.846 11:27:29 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.846 11:27:29 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:28.846 11:27:29 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.846 11:27:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.846 [2024-12-16 11:27:29.429069] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:05:28.846 [2024-12-16 11:27:29.429492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60996 ] 00:05:29.413 [2024-12-16 11:27:29.872557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.413 [2024-12-16 11:27:29.930037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.980 11:27:30 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.980 11:27:30 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:29.980 11:27:30 json_config -- json_config/common.sh@26 -- # echo '' 00:05:29.980 00:05:29.980 11:27:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:29.980 11:27:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:29.980 11:27:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.980 11:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.980 11:27:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:29.980 11:27:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:29.980 11:27:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.980 11:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.980 11:27:30 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:29.980 11:27:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:29.980 11:27:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:30.548 11:27:31 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:30.548 11:27:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:30.548 11:27:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.548 11:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.548 11:27:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:30.548 11:27:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:30.548 11:27:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:30.548 11:27:31 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:30.548 11:27:31 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:30.548 11:27:31 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:30.548 11:27:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:30.548 11:27:31 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister 
fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@54 -- # sort 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:30.807 11:27:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.807 11:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:30.807 11:27:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.807 11:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:30.807 11:27:31 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:30.807 11:27:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.066 MallocForNvmf0 00:05:31.066 11:27:31 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:31.066 11:27:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:31.633 MallocForNvmf1 00:05:31.633 11:27:32 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:31.633 11:27:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:31.892 [2024-12-16 11:27:32.470553] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.892 11:27:32 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.892 11:27:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.151 11:27:32 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.151 11:27:32 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.409 11:27:33 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.410 11:27:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.668 11:27:33 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:32.668 11:27:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:32.927 [2024-12-16 11:27:33.579246] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:32.927 11:27:33 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:32.927 11:27:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.927 11:27:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.927 11:27:33 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:32.927 11:27:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.927 11:27:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.927 11:27:33 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:32.927 11:27:33 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:32.927 11:27:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.494 MallocBdevForConfigChangeCheck 00:05:33.494 11:27:33 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:33.494 11:27:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.494 11:27:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.494 11:27:34 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:33.494 11:27:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.752 INFO: shutting down applications... 00:05:33.752 11:27:34 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
00:05:33.752 11:27:34 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:33.752 11:27:34 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:33.752 11:27:34 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:33.752 11:27:34 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:34.011 Calling clear_iscsi_subsystem 00:05:34.011 Calling clear_nvmf_subsystem 00:05:34.011 Calling clear_nbd_subsystem 00:05:34.011 Calling clear_ublk_subsystem 00:05:34.011 Calling clear_vhost_blk_subsystem 00:05:34.011 Calling clear_vhost_scsi_subsystem 00:05:34.011 Calling clear_bdev_subsystem 00:05:34.011 11:27:34 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:34.011 11:27:34 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:34.011 11:27:34 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:34.011 11:27:34 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.011 11:27:34 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:34.011 11:27:34 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:34.578 11:27:35 json_config -- json_config/json_config.sh@352 -- # break 00:05:34.578 11:27:35 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:34.578 11:27:35 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:34.578 11:27:35 json_config -- json_config/common.sh@31 -- # local app=target 00:05:34.578 11:27:35 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:34.578 11:27:35 json_config -- json_config/common.sh@35 -- # [[ -n 60996 ]] 00:05:34.578 11:27:35 json_config -- json_config/common.sh@38 -- # kill -SIGINT 60996 00:05:34.578 11:27:35 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:34.578 11:27:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.578 11:27:35 json_config -- json_config/common.sh@41 -- # kill -0 60996 00:05:34.578 11:27:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.145 11:27:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.145 11:27:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.145 11:27:35 json_config -- json_config/common.sh@41 -- # kill -0 60996 00:05:35.145 11:27:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:35.145 11:27:35 json_config -- json_config/common.sh@43 -- # break 00:05:35.145 11:27:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:35.145 SPDK target shutdown done 00:05:35.145 11:27:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:35.145 INFO: relaunching applications... 00:05:35.145 11:27:35 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
00:05:35.145 11:27:35 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:35.145 11:27:35 json_config -- json_config/common.sh@9 -- # local app=target 00:05:35.145 11:27:35 json_config -- json_config/common.sh@10 -- # shift 00:05:35.145 11:27:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.145 11:27:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.145 11:27:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.145 11:27:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.145 11:27:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.145 11:27:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61281 00:05:35.145 11:27:35 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:35.145 Waiting for target to run... 00:05:35.145 11:27:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.145 11:27:35 json_config -- json_config/common.sh@25 -- # waitforlisten 61281 /var/tmp/spdk_tgt.sock 00:05:35.145 11:27:35 json_config -- common/autotest_common.sh@835 -- # '[' -z 61281 ']' 00:05:35.145 11:27:35 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.145 11:27:35 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.145 11:27:35 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.145 11:27:35 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.145 11:27:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.145 [2024-12-16 11:27:35.723271] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:05:35.145 [2024-12-16 11:27:35.723389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61281 ] 00:05:35.711 [2024-12-16 11:27:36.172910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.711 [2024-12-16 11:27:36.220911] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.970 [2024-12-16 11:27:36.569444] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.970 [2024-12-16 11:27:36.601523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:36.229 11:27:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.229 11:27:36 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:36.229 00:05:36.229 11:27:36 json_config -- json_config/common.sh@26 -- # echo '' 00:05:36.229 11:27:36 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:36.229 INFO: Checking if target configuration is the same... 00:05:36.229 11:27:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 
00:05:36.229 11:27:36 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:36.229 11:27:36 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:36.229 11:27:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.229 + '[' 2 -ne 2 ']' 00:05:36.229 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:36.229 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:36.229 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:36.229 +++ basename /dev/fd/62 00:05:36.229 ++ mktemp /tmp/62.XXX 00:05:36.229 + tmp_file_1=/tmp/62.uxI 00:05:36.229 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:36.229 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:36.229 + tmp_file_2=/tmp/spdk_tgt_config.json.mOk 00:05:36.229 + ret=0 00:05:36.229 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:36.487 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:36.487 + diff -u /tmp/62.uxI /tmp/spdk_tgt_config.json.mOk 00:05:36.487 INFO: JSON config files are the same 00:05:36.487 + echo 'INFO: JSON config files are the same' 00:05:36.487 + rm /tmp/62.uxI /tmp/spdk_tgt_config.json.mOk 00:05:36.487 + exit 0 00:05:36.487 11:27:37 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:36.487 INFO: changing configuration and checking if this can be detected... 00:05:36.487 11:27:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:36.487 11:27:37 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:36.487 11:27:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:36.746 11:27:37 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:36.746 11:27:37 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:36.746 11:27:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.746 + '[' 2 -ne 2 ']' 00:05:36.746 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:36.746 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:36.746 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:36.746 +++ basename /dev/fd/62 00:05:36.746 ++ mktemp /tmp/62.XXX 00:05:37.004 + tmp_file_1=/tmp/62.rSf 00:05:37.004 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.004 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.004 + tmp_file_2=/tmp/spdk_tgt_config.json.YMW 00:05:37.004 + ret=0 00:05:37.004 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:37.263 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:37.263 + diff -u /tmp/62.rSf /tmp/spdk_tgt_config.json.YMW 00:05:37.263 + ret=1 00:05:37.263 + echo '=== Start of file: /tmp/62.rSf ===' 00:05:37.263 + cat /tmp/62.rSf 00:05:37.263 + echo '=== End of file: /tmp/62.rSf ===' 00:05:37.263 + echo '' 00:05:37.263 + echo '=== Start of file: /tmp/spdk_tgt_config.json.YMW ===' 00:05:37.263 + cat /tmp/spdk_tgt_config.json.YMW 00:05:37.263 + echo '=== End of file: /tmp/spdk_tgt_config.json.YMW ===' 00:05:37.263 + echo '' 00:05:37.263 + rm /tmp/62.rSf /tmp/spdk_tgt_config.json.YMW 00:05:37.263 + exit 1 00:05:37.263 INFO: configuration change detected. 00:05:37.263 11:27:38 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:37.263 11:27:38 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:37.263 11:27:38 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:37.263 11:27:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.263 11:27:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.263 11:27:38 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:37.263 11:27:38 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:37.263 11:27:38 json_config -- json_config/json_config.sh@324 -- # [[ -n 61281 ]] 00:05:37.263 11:27:38 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:37.263 11:27:38 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:37.263 11:27:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.263 11:27:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.522 11:27:38 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:37.522 11:27:38 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:37.522 11:27:38 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:37.522 11:27:38 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:37.522 11:27:38 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:37.522 11:27:38 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:37.522 11:27:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.522 11:27:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.522 11:27:38 json_config -- json_config/json_config.sh@330 -- # killprocess 61281 00:05:37.522 11:27:38 json_config -- common/autotest_common.sh@954 -- # '[' -z 61281 ']' 00:05:37.522 11:27:38 json_config -- common/autotest_common.sh@958 -- # kill -0 61281 00:05:37.522 11:27:38 json_config -- common/autotest_common.sh@959 -- # uname 00:05:37.522 11:27:38 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.522 11:27:38 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61281 00:05:37.522 
11:27:38 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.522 11:27:38 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.522 killing process with pid 61281 00:05:37.522 11:27:38 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61281' 00:05:37.522 11:27:38 json_config -- common/autotest_common.sh@973 -- # kill 61281 00:05:37.522 11:27:38 json_config -- common/autotest_common.sh@978 -- # wait 61281 00:05:37.781 11:27:38 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.781 11:27:38 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:37.781 11:27:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.781 11:27:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.781 11:27:38 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:37.781 INFO: Success 00:05:37.781 11:27:38 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:37.781 00:05:37.781 real 0m9.232s 00:05:37.781 user 0m13.385s 00:05:37.781 sys 0m1.946s 00:05:37.781 11:27:38 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.781 11:27:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.781 ************************************ 00:05:37.781 END TEST json_config 00:05:37.781 ************************************ 00:05:37.781 11:27:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:37.781 11:27:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.781 11:27:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.781 11:27:38 -- common/autotest_common.sh@10 -- # set +x 00:05:37.781 ************************************ 00:05:37.781 START TEST json_config_extra_key 00:05:37.781 ************************************ 00:05:37.781 11:27:38 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:37.781 11:27:38 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.781 11:27:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.781 11:27:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:38.041 11:27:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.041 11:27:38 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:38.041 11:27:38 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.041 11:27:38 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:38.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.041 --rc genhtml_branch_coverage=1 00:05:38.041 --rc genhtml_function_coverage=1 00:05:38.041 --rc genhtml_legend=1 00:05:38.041 --rc geninfo_all_blocks=1 00:05:38.041 --rc geninfo_unexecuted_blocks=1 00:05:38.041 00:05:38.041 ' 00:05:38.041 11:27:38 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:38.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.041 --rc genhtml_branch_coverage=1 00:05:38.041 --rc genhtml_function_coverage=1 00:05:38.041 --rc genhtml_legend=1 00:05:38.041 --rc geninfo_all_blocks=1 00:05:38.041 --rc geninfo_unexecuted_blocks=1 00:05:38.041 00:05:38.041 ' 00:05:38.041 11:27:38 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:38.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.041 --rc genhtml_branch_coverage=1 00:05:38.041 --rc genhtml_function_coverage=1 00:05:38.041 --rc genhtml_legend=1 00:05:38.041 --rc geninfo_all_blocks=1 00:05:38.041 --rc geninfo_unexecuted_blocks=1 00:05:38.041 00:05:38.041 ' 00:05:38.041 11:27:38 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:38.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.041 --rc genhtml_branch_coverage=1 00:05:38.041 --rc genhtml_function_coverage=1 00:05:38.041 --rc genhtml_legend=1 00:05:38.041 --rc geninfo_all_blocks=1 00:05:38.041 --rc geninfo_unexecuted_blocks=1 00:05:38.041 00:05:38.041 ' 00:05:38.041 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.041 11:27:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.041 11:27:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.041 11:27:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.041 11:27:38 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.041 11:27:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:38.041 11:27:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:38.041 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:38.041 11:27:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:38.042 11:27:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:38.042 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:38.042 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:38.042 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:38.042 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:38.042 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:38.042 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:38.042 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:38.042 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:38.042 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:38.042 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:38.042 INFO: launching applications... 00:05:38.042 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
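(The "[: : integer expression expected" message a few lines up comes from nvmf/common.sh line 33 testing an empty string with -eq; the trace shows the offending '[' '' -eq 1 ']' immediately before it, and the run carries on regardless.) The launch announced here follows the same start-and-wait pattern as the earlier json_config run: spdk_tgt is started against extra_key.json and the harness blocks until the RPC socket answers. A hedged sketch of that pattern (the spdk_tgt arguments are the ones in the trace below; the polling loop and the rpc_get_methods probe are illustrative, since the real waitforlisten helper body is not shown in this log):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
app_pid=$!
# Poll until the UNIX-domain RPC socket accepts requests (max_retries=100 in the trace).
for ((i = 0; i < 100; i++)); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods \
        >/dev/null 2>&1 && break
    sleep 0.1
done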
00:05:38.042 11:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:38.042 11:27:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:38.042 11:27:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:38.042 11:27:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:38.042 11:27:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:38.042 11:27:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:38.042 11:27:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.042 11:27:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.042 11:27:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61465 00:05:38.042 Waiting for target to run... 00:05:38.042 11:27:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:38.042 11:27:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61465 /var/tmp/spdk_tgt.sock 00:05:38.042 11:27:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:38.042 11:27:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 61465 ']' 00:05:38.042 11:27:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:38.042 11:27:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:38.042 11:27:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:38.042 11:27:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.042 11:27:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:38.042 [2024-12-16 11:27:38.726487] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:05:38.042 [2024-12-16 11:27:38.726601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61465 ] 00:05:38.609 [2024-12-16 11:27:39.168785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.609 [2024-12-16 11:27:39.221481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.176 11:27:39 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.176 11:27:39 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:39.176 00:05:39.176 11:27:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:39.176 INFO: shutting down applications... 00:05:39.176 11:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
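The shutdown that follows is cooperative: json_config/common.sh sends SIGINT to the recorded pid and then polls with kill -0 (which only tests whether the process still exists) on a 30 x 0.5 s budget. Condensed from the loop visible in the trace below:

kill -SIGINT "${app_pid[target]}"      # ask spdk_tgt to exit cleanly
for ((i = 0; i < 30; i++)); do
    kill -0 "${app_pid[target]}" 2>/dev/null || break
    sleep 0.5                          # still alive; give it another half second
done
echo 'SPDK target shutdown done'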
00:05:39.176 11:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:39.176 11:27:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:39.176 11:27:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.176 11:27:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61465 ]] 00:05:39.176 11:27:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61465 00:05:39.176 11:27:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.176 11:27:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.176 11:27:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61465 00:05:39.176 11:27:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.744 11:27:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.744 11:27:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.744 11:27:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61465 00:05:39.744 11:27:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.744 11:27:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:39.744 11:27:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.745 11:27:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.745 SPDK target shutdown done 00:05:39.745 Success 00:05:39.745 11:27:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:39.745 00:05:39.745 real 0m1.847s 00:05:39.745 user 0m1.803s 00:05:39.745 sys 0m0.494s 00:05:39.745 11:27:40 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.745 11:27:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.745 ************************************ 00:05:39.745 END TEST json_config_extra_key 00:05:39.745 ************************************ 00:05:39.745 11:27:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.745 11:27:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.745 11:27:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.745 11:27:40 -- common/autotest_common.sh@10 -- # set +x 00:05:39.745 ************************************ 00:05:39.745 START TEST alias_rpc 00:05:39.745 ************************************ 00:05:39.745 11:27:40 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.745 * Looking for test storage... 
00:05:39.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:39.745 11:27:40 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.745 11:27:40 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.745 11:27:40 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.004 11:27:40 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.004 11:27:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:40.004 11:27:40 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.004 11:27:40 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.004 --rc genhtml_branch_coverage=1 00:05:40.004 --rc genhtml_function_coverage=1 00:05:40.004 --rc genhtml_legend=1 00:05:40.004 --rc geninfo_all_blocks=1 00:05:40.004 --rc geninfo_unexecuted_blocks=1 00:05:40.004 00:05:40.004 ' 00:05:40.004 11:27:40 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.004 --rc genhtml_branch_coverage=1 00:05:40.004 --rc genhtml_function_coverage=1 00:05:40.004 --rc genhtml_legend=1 00:05:40.004 --rc geninfo_all_blocks=1 00:05:40.004 --rc geninfo_unexecuted_blocks=1 00:05:40.004 00:05:40.004 ' 00:05:40.004 11:27:40 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.004 --rc genhtml_branch_coverage=1 00:05:40.004 --rc genhtml_function_coverage=1 00:05:40.004 --rc genhtml_legend=1 00:05:40.004 --rc geninfo_all_blocks=1 00:05:40.004 --rc geninfo_unexecuted_blocks=1 00:05:40.004 00:05:40.005 ' 00:05:40.005 11:27:40 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.005 --rc genhtml_branch_coverage=1 00:05:40.005 --rc genhtml_function_coverage=1 00:05:40.005 --rc genhtml_legend=1 00:05:40.005 --rc geninfo_all_blocks=1 00:05:40.005 --rc geninfo_unexecuted_blocks=1 00:05:40.005 00:05:40.005 ' 00:05:40.005 11:27:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.005 11:27:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61554 00:05:40.005 11:27:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61554 00:05:40.005 11:27:40 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 61554 ']' 00:05:40.005 11:27:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.005 11:27:40 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.005 11:27:40 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.005 11:27:40 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.005 11:27:40 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.005 11:27:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.005 [2024-12-16 11:27:40.596608] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:05:40.005 [2024-12-16 11:27:40.596713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61554 ] 00:05:40.005 [2024-12-16 11:27:40.743566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.308 [2024-12-16 11:27:40.802778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.565 11:27:41 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.565 11:27:41 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.565 11:27:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:40.823 11:27:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61554 00:05:40.823 11:27:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 61554 ']' 00:05:40.823 11:27:41 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 61554 00:05:40.823 11:27:41 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:40.823 11:27:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.823 11:27:41 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61554 00:05:40.823 11:27:41 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.823 11:27:41 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.823 killing process with pid 61554 00:05:40.823 11:27:41 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61554' 00:05:40.823 11:27:41 alias_rpc -- common/autotest_common.sh@973 -- # kill 61554 00:05:40.823 11:27:41 alias_rpc -- common/autotest_common.sh@978 -- # wait 61554 00:05:41.081 00:05:41.081 real 0m1.468s 00:05:41.081 user 0m1.545s 00:05:41.081 sys 0m0.428s 00:05:41.081 11:27:41 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.081 11:27:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.081 ************************************ 00:05:41.081 END TEST alias_rpc 00:05:41.081 ************************************ 00:05:41.341 11:27:41 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:05:41.341 11:27:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.341 11:27:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.341 11:27:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.341 11:27:41 -- common/autotest_common.sh@10 -- # set +x 00:05:41.341 ************************************ 00:05:41.341 START TEST dpdk_mem_utility 00:05:41.341 ************************************ 00:05:41.341 11:27:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.341 * Looking for test storage... 
00:05:41.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:41.341 11:27:41 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.341 11:27:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.341 11:27:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.341 11:27:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.341 11:27:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:41.341 11:27:42 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.341 11:27:42 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.341 --rc genhtml_branch_coverage=1 00:05:41.341 --rc genhtml_function_coverage=1 00:05:41.341 --rc genhtml_legend=1 00:05:41.341 --rc geninfo_all_blocks=1 00:05:41.341 --rc geninfo_unexecuted_blocks=1 00:05:41.341 00:05:41.341 ' 00:05:41.341 11:27:42 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.341 --rc 
genhtml_branch_coverage=1 00:05:41.341 --rc genhtml_function_coverage=1 00:05:41.341 --rc genhtml_legend=1 00:05:41.341 --rc geninfo_all_blocks=1 00:05:41.341 --rc geninfo_unexecuted_blocks=1 00:05:41.341 00:05:41.341 ' 00:05:41.341 11:27:42 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.341 --rc genhtml_branch_coverage=1 00:05:41.341 --rc genhtml_function_coverage=1 00:05:41.341 --rc genhtml_legend=1 00:05:41.341 --rc geninfo_all_blocks=1 00:05:41.341 --rc geninfo_unexecuted_blocks=1 00:05:41.341 00:05:41.341 ' 00:05:41.341 11:27:42 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.341 --rc genhtml_branch_coverage=1 00:05:41.341 --rc genhtml_function_coverage=1 00:05:41.341 --rc genhtml_legend=1 00:05:41.341 --rc geninfo_all_blocks=1 00:05:41.341 --rc geninfo_unexecuted_blocks=1 00:05:41.341 00:05:41.341 ' 00:05:41.341 11:27:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:41.341 11:27:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61642 00:05:41.341 11:27:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61642 00:05:41.341 11:27:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.341 11:27:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 61642 ']' 00:05:41.341 11:27:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.341 11:27:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.341 11:27:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.341 11:27:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.341 11:27:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:41.600 [2024-12-16 11:27:42.105312] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:05:41.600 [2024-12-16 11:27:42.105420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61642 ] 00:05:41.600 [2024-12-16 11:27:42.254122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.600 [2024-12-16 11:27:42.323226] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.538 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.538 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:42.538 11:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:42.538 11:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:42.538 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.538 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.538 { 00:05:42.538 "filename": "/tmp/spdk_mem_dump.txt" 00:05:42.538 } 00:05:42.538 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.538 11:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:42.538 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:42.538 1 heaps totaling size 818.000000 MiB 00:05:42.538 size: 818.000000 MiB heap id: 0 00:05:42.538 end heaps---------- 00:05:42.538 9 mempools totaling size 603.782043 MiB 00:05:42.538 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:42.538 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:42.538 size: 100.555481 MiB name: bdev_io_61642 00:05:42.538 size: 50.003479 MiB name: msgpool_61642 00:05:42.538 size: 36.509338 MiB name: fsdev_io_61642 00:05:42.538 size: 21.763794 MiB name: PDU_Pool 00:05:42.538 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:42.538 size: 4.133484 MiB name: evtpool_61642 00:05:42.538 size: 0.026123 MiB name: Session_Pool 00:05:42.538 end mempools------- 00:05:42.538 6 memzones totaling size 4.142822 MiB 00:05:42.538 size: 1.000366 MiB name: RG_ring_0_61642 00:05:42.538 size: 1.000366 MiB name: RG_ring_1_61642 00:05:42.538 size: 1.000366 MiB name: RG_ring_4_61642 00:05:42.538 size: 1.000366 MiB name: RG_ring_5_61642 00:05:42.538 size: 0.125366 MiB name: RG_ring_2_61642 00:05:42.538 size: 0.015991 MiB name: RG_ring_3_61642 00:05:42.538 end memzones------- 00:05:42.538 11:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:42.538 heap id: 0 total size: 818.000000 MiB number of busy elements: 225 number of free elements: 15 00:05:42.538 list of free elements. 
size: 10.819336 MiB 00:05:42.538 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:42.538 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:42.538 element at address: 0x200000400000 with size: 0.996155 MiB 00:05:42.538 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:42.538 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:42.538 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:42.538 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:42.538 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:42.538 element at address: 0x20001ae00000 with size: 0.573364 MiB 00:05:42.538 element at address: 0x200000c00000 with size: 0.490662 MiB 00:05:42.538 element at address: 0x20000a600000 with size: 0.489807 MiB 00:05:42.538 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:42.538 element at address: 0x200003e00000 with size: 0.481201 MiB 00:05:42.538 element at address: 0x200028200000 with size: 0.396851 MiB 00:05:42.538 element at address: 0x200000800000 with size: 0.353394 MiB 00:05:42.538 list of standard malloc elements. size: 199.251770 MiB 00:05:42.538 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:42.538 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:42.538 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:42.538 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:42.538 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:42.538 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:42.538 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:42.538 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:42.538 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:42.538 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000085a780 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000085a980 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000087ef00 with size: 0.000183 MiB 
00:05:42.538 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:42.538 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:42.538 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:42.539 element at 
address: 0x20000a67d640 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94840 
with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:42.539 element at address: 0x200028265980 with size: 0.000183 MiB 00:05:42.539 element at address: 0x200028265a40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826c640 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826de00 with size: 0.000183 MiB 
00:05:42.539 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:42.539 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:42.539 list of memzone associated elements. 
size: 607.928894 MiB 00:05:42.539 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:42.539 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:42.539 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:42.539 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:42.539 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:42.539 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_61642_0 00:05:42.539 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:42.539 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61642_0 00:05:42.539 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:42.539 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_61642_0 00:05:42.539 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:42.539 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:42.539 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:42.539 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:42.540 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:42.540 associated memzone info: size: 3.000122 MiB name: MP_evtpool_61642_0 00:05:42.540 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:42.540 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61642 00:05:42.540 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:42.540 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61642 00:05:42.540 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:42.540 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:42.540 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:42.540 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:42.540 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:42.540 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:42.540 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:42.540 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:42.540 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:42.540 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61642 00:05:42.540 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:42.540 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61642 00:05:42.540 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:42.540 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61642 00:05:42.540 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:42.540 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61642 00:05:42.540 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:42.540 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_61642 00:05:42.540 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:42.540 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61642 00:05:42.540 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:42.540 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:42.540 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:42.540 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:42.540 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:42.540 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:42.540 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:42.540 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_61642 00:05:42.540 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:05:42.540 associated memzone info: size: 0.125366 MiB name: RG_ring_2_61642 00:05:42.540 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:42.540 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:42.540 element at address: 0x200028265b00 with size: 0.023743 MiB 00:05:42.540 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:42.540 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:05:42.540 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61642 00:05:42.540 element at address: 0x20002826bc40 with size: 0.002441 MiB 00:05:42.540 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:42.540 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:42.540 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61642 00:05:42.540 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:42.540 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_61642 00:05:42.540 element at address: 0x20000085a840 with size: 0.000305 MiB 00:05:42.540 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61642 00:05:42.540 element at address: 0x20002826c700 with size: 0.000305 MiB 00:05:42.540 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:42.540 11:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:42.540 11:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61642 00:05:42.540 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 61642 ']' 00:05:42.540 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 61642 00:05:42.540 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:42.540 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.540 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61642 00:05:42.540 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.540 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.540 killing process with pid 61642 00:05:42.540 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61642' 00:05:42.540 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 61642 00:05:42.540 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 61642 00:05:43.107 00:05:43.108 real 0m1.782s 00:05:43.108 user 0m1.953s 00:05:43.108 sys 0m0.446s 00:05:43.108 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.108 11:27:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.108 ************************************ 00:05:43.108 END TEST dpdk_mem_utility 00:05:43.108 ************************************ 00:05:43.108 11:27:43 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:43.108 11:27:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.108 11:27:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.108 11:27:43 -- common/autotest_common.sh@10 -- # set +x 
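The teardown just traced is autotest's killprocess pattern: check the pid is non-empty, probe liveness with kill -0, look up the process name with ps so a sudo wrapper is never signalled directly, then kill and wait. A minimal stand-alone sketch of that pattern (illustrative only; the real helper lives in common/autotest_common.sh and handles more cases):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # no pid recorded, nothing to kill
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1        # refuse to signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # only valid for children of this shell, as in the test run
    }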
00:05:43.108 ************************************ 00:05:43.108 START TEST event 00:05:43.108 ************************************ 00:05:43.108 11:27:43 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:43.108 * Looking for test storage... 00:05:43.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:43.108 11:27:43 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.108 11:27:43 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.108 11:27:43 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.367 11:27:43 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.367 11:27:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.367 11:27:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.367 11:27:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.367 11:27:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.367 11:27:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.367 11:27:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.367 11:27:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.367 11:27:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.367 11:27:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.367 11:27:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.367 11:27:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.367 11:27:43 event -- scripts/common.sh@344 -- # case "$op" in 00:05:43.367 11:27:43 event -- scripts/common.sh@345 -- # : 1 00:05:43.367 11:27:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.367 11:27:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.367 11:27:43 event -- scripts/common.sh@365 -- # decimal 1 00:05:43.367 11:27:43 event -- scripts/common.sh@353 -- # local d=1 00:05:43.367 11:27:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.367 11:27:43 event -- scripts/common.sh@355 -- # echo 1 00:05:43.367 11:27:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.367 11:27:43 event -- scripts/common.sh@366 -- # decimal 2 00:05:43.367 11:27:43 event -- scripts/common.sh@353 -- # local d=2 00:05:43.367 11:27:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.367 11:27:43 event -- scripts/common.sh@355 -- # echo 2 00:05:43.367 11:27:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.367 11:27:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.367 11:27:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.367 11:27:43 event -- scripts/common.sh@368 -- # return 0 00:05:43.367 11:27:43 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.367 11:27:43 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.367 --rc genhtml_branch_coverage=1 00:05:43.367 --rc genhtml_function_coverage=1 00:05:43.367 --rc genhtml_legend=1 00:05:43.367 --rc geninfo_all_blocks=1 00:05:43.367 --rc geninfo_unexecuted_blocks=1 00:05:43.367 00:05:43.367 ' 00:05:43.367 11:27:43 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.367 --rc genhtml_branch_coverage=1 00:05:43.367 --rc genhtml_function_coverage=1 00:05:43.367 --rc genhtml_legend=1 00:05:43.367 --rc 
geninfo_all_blocks=1 00:05:43.367 --rc geninfo_unexecuted_blocks=1 00:05:43.367 00:05:43.367 ' 00:05:43.367 11:27:43 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.367 --rc genhtml_branch_coverage=1 00:05:43.367 --rc genhtml_function_coverage=1 00:05:43.367 --rc genhtml_legend=1 00:05:43.367 --rc geninfo_all_blocks=1 00:05:43.367 --rc geninfo_unexecuted_blocks=1 00:05:43.367 00:05:43.367 ' 00:05:43.367 11:27:43 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.367 --rc genhtml_branch_coverage=1 00:05:43.367 --rc genhtml_function_coverage=1 00:05:43.367 --rc genhtml_legend=1 00:05:43.367 --rc geninfo_all_blocks=1 00:05:43.367 --rc geninfo_unexecuted_blocks=1 00:05:43.367 00:05:43.367 ' 00:05:43.367 11:27:43 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:43.367 11:27:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:43.367 11:27:43 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:43.367 11:27:43 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:43.367 11:27:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.367 11:27:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.367 ************************************ 00:05:43.367 START TEST event_perf 00:05:43.367 ************************************ 00:05:43.367 11:27:43 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:43.367 Running I/O for 1 seconds...[2024-12-16 11:27:43.909446] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:05:43.367 [2024-12-16 11:27:43.909564] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61745 ] 00:05:43.367 [2024-12-16 11:27:44.052393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.626 [2024-12-16 11:27:44.118982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.626 [2024-12-16 11:27:44.119090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.626 Running I/O for 1 seconds...[2024-12-16 11:27:44.119218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.626 [2024-12-16 11:27:44.119223] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.560 00:05:44.560 lcore 0: 117397 00:05:44.560 lcore 1: 117398 00:05:44.560 lcore 2: 117394 00:05:44.560 lcore 3: 117395 00:05:44.560 done. 
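The version probe that ran just before this test is worth unpacking: scripts/common.sh splits both version strings on the characters .-: and compares them component-wise, so lt 1.15 2 returns 0 (true, since 1 < 2) and the lcov 1.x-compatible LCOV_OPTS/LCOV values above get exported. A simplified sketch of that comparison, assuming purely numeric components (the real cmp_versions in scripts/common.sh is more general):

    lt() {
        local IFS='.-:'                     # same separators the trace shows
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} )) i
        for (( i = 0; i < n; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing components count as 0
            (( a < b )) && return 0         # first differing component decides
            (( a > b )) && return 1
        done
        return 1                            # equal versions are not less-than
    }
    lt 1.15 2 && echo "old lcov"            # matches the trace: returns 0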
00:05:44.560 00:05:44.560 real 0m1.292s 00:05:44.560 user 0m4.119s 00:05:44.560 sys 0m0.053s 00:05:44.560 11:27:45 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.560 ************************************ 00:05:44.560 END TEST event_perf 00:05:44.560 11:27:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.560 ************************************ 00:05:44.560 11:27:45 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.560 11:27:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:44.560 11:27:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.560 11:27:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.560 ************************************ 00:05:44.560 START TEST event_reactor 00:05:44.560 ************************************ 00:05:44.560 11:27:45 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.560 [2024-12-16 11:27:45.260840] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:05:44.560 [2024-12-16 11:27:45.260950] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61778 ] 00:05:44.818 [2024-12-16 11:27:45.406846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.818 [2024-12-16 11:27:45.471041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.194 test_start 00:05:46.194 oneshot 00:05:46.194 tick 100 00:05:46.194 tick 100 00:05:46.194 tick 250 00:05:46.194 tick 100 00:05:46.194 tick 100 00:05:46.194 tick 250 00:05:46.194 tick 100 00:05:46.194 tick 500 00:05:46.194 tick 100 00:05:46.194 tick 100 00:05:46.194 tick 250 00:05:46.194 tick 100 00:05:46.194 tick 100 00:05:46.194 test_end 00:05:46.194 00:05:46.194 real 0m1.281s 00:05:46.194 user 0m1.129s 00:05:46.194 sys 0m0.044s 00:05:46.194 11:27:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.194 11:27:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:46.194 ************************************ 00:05:46.194 END TEST event_reactor 00:05:46.194 ************************************ 00:05:46.194 11:27:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:46.194 11:27:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:46.194 11:27:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.194 11:27:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.194 ************************************ 00:05:46.194 START TEST event_reactor_perf 00:05:46.194 ************************************ 00:05:46.194 11:27:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:46.194 [2024-12-16 11:27:46.594181] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:05:46.194 [2024-12-16 11:27:46.594309] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61808 ] 00:05:46.194 [2024-12-16 11:27:46.741596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.194 [2024-12-16 11:27:46.802866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.167 test_start 00:05:47.167 test_end 00:05:47.167 Performance: 374204 events per second 00:05:47.167 00:05:47.167 real 0m1.294s 00:05:47.167 user 0m1.139s 00:05:47.167 sys 0m0.049s 00:05:47.167 11:27:47 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.167 11:27:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.167 ************************************ 00:05:47.167 END TEST event_reactor_perf 00:05:47.167 ************************************ 00:05:47.167 11:27:47 event -- event/event.sh@49 -- # uname -s 00:05:47.427 11:27:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:47.427 11:27:47 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:47.427 11:27:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.427 11:27:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.427 11:27:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.427 ************************************ 00:05:47.427 START TEST event_scheduler 00:05:47.427 ************************************ 00:05:47.427 11:27:47 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:47.427 * Looking for test storage... 
00:05:47.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.427 11:27:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.427 --rc genhtml_branch_coverage=1 00:05:47.427 --rc genhtml_function_coverage=1 00:05:47.427 --rc genhtml_legend=1 00:05:47.427 --rc geninfo_all_blocks=1 00:05:47.427 --rc geninfo_unexecuted_blocks=1 00:05:47.427 00:05:47.427 ' 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.427 --rc genhtml_branch_coverage=1 00:05:47.427 --rc genhtml_function_coverage=1 00:05:47.427 --rc genhtml_legend=1 00:05:47.427 --rc geninfo_all_blocks=1 00:05:47.427 --rc geninfo_unexecuted_blocks=1 00:05:47.427 00:05:47.427 ' 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.427 --rc genhtml_branch_coverage=1 00:05:47.427 --rc genhtml_function_coverage=1 00:05:47.427 --rc genhtml_legend=1 00:05:47.427 --rc geninfo_all_blocks=1 00:05:47.427 --rc geninfo_unexecuted_blocks=1 00:05:47.427 00:05:47.427 ' 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.427 --rc genhtml_branch_coverage=1 00:05:47.427 --rc genhtml_function_coverage=1 00:05:47.427 --rc genhtml_legend=1 00:05:47.427 --rc geninfo_all_blocks=1 00:05:47.427 --rc geninfo_unexecuted_blocks=1 00:05:47.427 00:05:47.427 ' 00:05:47.427 11:27:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:47.427 11:27:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=61883 00:05:47.427 11:27:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:47.427 11:27:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.427 11:27:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 61883 00:05:47.427 11:27:48 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 61883 ']' 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.427 11:27:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.686 [2024-12-16 11:27:48.216192] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:05:47.686 [2024-12-16 11:27:48.216312] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61883 ] 00:05:47.686 [2024-12-16 11:27:48.376174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.944 [2024-12-16 11:27:48.454669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.944 [2024-12-16 11:27:48.454894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.944 [2024-12-16 11:27:48.455008] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.944 [2024-12-16 11:27:48.455011] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.944 11:27:48 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.944 11:27:48 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:47.944 11:27:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:47.944 11:27:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.944 11:27:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.944 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.944 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.944 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.944 POWER: Cannot set governor of lcore 0 to performance 00:05:47.944 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.944 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.944 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.944 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.944 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:47.944 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:47.944 POWER: Unable to set Power Management Environment for lcore 0 00:05:47.944 [2024-12-16 11:27:48.500896] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:47.944 [2024-12-16 11:27:48.500912] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:47.944 [2024-12-16 11:27:48.500922] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:47.944 [2024-12-16 11:27:48.500937] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:47.944 [2024-12-16 11:27:48.500946] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:47.944 [2024-12-16 11:27:48.500955] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:47.944 11:27:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.944 11:27:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:47.944 11:27:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.944 11:27:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.944 [2024-12-16 11:27:48.611183] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:47.944 11:27:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.944 11:27:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:47.944 11:27:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.945 11:27:48 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.945 11:27:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.945 ************************************ 00:05:47.945 START TEST scheduler_create_thread 00:05:47.945 ************************************ 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.945 2 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.945 3 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.945 4 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.945 5 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.945 6 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.945 7 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.945 8 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.945 9 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.945 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.203 10 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.203 11:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.577 11:27:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.577 11:27:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:49.577 11:27:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:49.577 11:27:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.577 11:27:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.511 11:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.511 00:05:50.511 real 0m2.614s 00:05:50.511 user 0m0.020s 00:05:50.511 sys 0m0.006s 00:05:50.511 11:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.511 11:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.511 ************************************ 00:05:50.511 END TEST scheduler_create_thread 00:05:50.511 ************************************ 00:05:50.769 11:27:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:50.769 11:27:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 61883 00:05:50.769 11:27:51 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 61883 ']' 00:05:50.769 11:27:51 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 61883 00:05:50.769 11:27:51 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:50.769 11:27:51 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.769 11:27:51 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61883 00:05:50.769 11:27:51 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:50.769 11:27:51 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:50.769 killing process with pid 61883 00:05:50.769 11:27:51 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61883' 00:05:50.769 11:27:51 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 61883 00:05:50.769 11:27:51 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 61883 00:05:51.027 [2024-12-16 11:27:51.716088] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:51.287 00:05:51.287 real 0m4.004s 00:05:51.287 user 0m5.780s 00:05:51.287 sys 0m0.389s 00:05:51.287 11:27:51 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.287 ************************************ 00:05:51.287 END TEST event_scheduler 00:05:51.287 11:27:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.287 ************************************ 00:05:51.287 11:27:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:51.287 11:27:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:51.287 11:27:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.287 11:27:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.287 11:27:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.287 ************************************ 00:05:51.287 START TEST app_repeat 00:05:51.287 ************************************ 00:05:51.287 11:27:51 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61981 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.287 Process app_repeat pid: 61981 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61981' 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:51.287 spdk_app_start Round 0 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:51.287 11:27:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61981 /var/tmp/spdk-nbd.sock 00:05:51.287 11:27:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 61981 ']' 00:05:51.287 11:27:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.287 11:27:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.287 11:27:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.287 11:27:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.287 11:27:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.287 [2024-12-16 11:27:52.013480] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
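The scheduler_create_thread test that just ended drives a complete thread lifecycle through the scheduler_plugin RPCs: create pinned active and idle threads with a cpumask (-m) and an active percentage (-a), retune one thread with scheduler_thread_set_active, and delete another. Condensed to its three essential calls (rpc_cmd is the framework's RPC wrapper; thread ids such as 11 and 12 are returned by the create call in the real run):

    # create a busy thread pinned to core 0 (cpumask 0x1, 100% active)
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
              -n active_pinned -m 0x1 -a 100)
    # drop it to 50% active, then remove it
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"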
00:05:51.287 [2024-12-16 11:27:52.013573] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61981 ] 00:05:51.545 [2024-12-16 11:27:52.165318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.545 [2024-12-16 11:27:52.239757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.545 [2024-12-16 11:27:52.239768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.806 11:27:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.806 11:27:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:51.806 11:27:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.063 Malloc0 00:05:52.063 11:27:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.322 Malloc1 00:05:52.322 11:27:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.322 11:27:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.580 /dev/nbd0 00:05:52.580 11:27:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.580 11:27:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.580 11:27:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:52.580 11:27:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:52.580 11:27:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.580 11:27:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.580 11:27:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:52.580 11:27:53 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:52.580 11:27:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.580 11:27:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.580 11:27:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.580 1+0 records in 00:05:52.580 1+0 records out 00:05:52.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345064 s, 11.9 MB/s 00:05:52.580 11:27:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.838 11:27:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:52.838 11:27:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.838 11:27:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.838 11:27:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:52.838 11:27:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.838 11:27:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.838 11:27:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.097 /dev/nbd1 00:05:53.097 11:27:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.097 11:27:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.097 1+0 records in 00:05:53.097 1+0 records out 00:05:53.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376603 s, 10.9 MB/s 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.097 11:27:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:53.097 11:27:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.097 11:27:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.097 11:27:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.097 11:27:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
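The readiness probe traced here, waitfornbd, polls /proc/partitions until the nbd name appears, then reads one 4 KiB block with direct I/O and insists the copy is non-empty. A shortened sketch (the scratch path is illustrative, and the sleep between retries is an assumption; the trace only shows the retry counters):

    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                       # assumed back-off between retries
        done
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                    # a zero-byte read means the device is not up
    }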
00:05:53.097 11:27:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.355 11:27:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.355 { 00:05:53.355 "bdev_name": "Malloc0", 00:05:53.355 "nbd_device": "/dev/nbd0" 00:05:53.355 }, 00:05:53.355 { 00:05:53.355 "bdev_name": "Malloc1", 00:05:53.355 "nbd_device": "/dev/nbd1" 00:05:53.355 } 00:05:53.355 ]' 00:05:53.355 11:27:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.355 { 00:05:53.355 "bdev_name": "Malloc0", 00:05:53.355 "nbd_device": "/dev/nbd0" 00:05:53.355 }, 00:05:53.355 { 00:05:53.355 "bdev_name": "Malloc1", 00:05:53.355 "nbd_device": "/dev/nbd1" 00:05:53.355 } 00:05:53.355 ]' 00:05:53.355 11:27:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.613 /dev/nbd1' 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.613 /dev/nbd1' 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.613 256+0 records in 00:05:53.613 256+0 records out 00:05:53.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00725151 s, 145 MB/s 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.613 256+0 records in 00:05:53.613 256+0 records out 00:05:53.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0342957 s, 30.6 MB/s 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.613 256+0 records in 00:05:53.613 256+0 records out 00:05:53.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262804 s, 39.9 MB/s 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.613 11:27:54 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.613 11:27:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.871 11:27:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.871 11:27:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.871 11:27:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.871 11:27:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.871 11:27:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.871 11:27:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.871 11:27:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.871 11:27:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.871 11:27:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.871 11:27:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.440 11:27:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.440 11:27:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.440 11:27:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.440 11:27:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.440 11:27:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.440 11:27:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.440 11:27:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.440 11:27:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.440 11:27:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.440 11:27:54 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.440 11:27:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.699 11:27:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.699 11:27:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.699 11:27:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.699 11:27:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.699 11:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.699 11:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.699 11:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.699 11:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.699 11:27:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.699 11:27:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.699 11:27:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.699 11:27:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.699 11:27:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.958 11:27:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.217 [2024-12-16 11:27:55.873894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.217 [2024-12-16 11:27:55.927949] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.217 [2024-12-16 11:27:55.927961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.476 [2024-12-16 11:27:55.984392] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.476 [2024-12-16 11:27:55.984470] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.011 11:27:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.011 spdk_app_start Round 1 00:05:58.011 11:27:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:58.011 11:27:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61981 /var/tmp/spdk-nbd.sock 00:05:58.011 11:27:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 61981 ']' 00:05:58.011 11:27:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.011 11:27:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.011 11:27:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
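Round 1 below repeats the write/verify cycle round 0 just finished: fill a scratch file with 1 MiB of random data, dd it onto every nbd device with direct I/O, then cmp each device against the file byte-for-byte. A simplified single-pass sketch of that nbd_dd_data_verify flow (the real helper in nbd_common.sh splits write and verify into separate operations):

    nbd_dd_data_verify() {
        local tmp_file=/tmp/nbdrandtest nbd     # illustrative path
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for nbd in "$@"; do
            dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
        done
        for nbd in "$@"; do
            cmp -b -n 1M "$tmp_file" "$nbd"     # non-zero exit on any mismatch
        done
        rm "$tmp_file"
    }
    nbd_dd_data_verify /dev/nbd0 /dev/nbd1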
00:05:58.011 11:27:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:58.011 11:27:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:58.270 11:27:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:58.270 11:27:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:58.270 11:27:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:58.838 Malloc0
00:05:58.838 11:27:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:59.096 Malloc1
00:05:59.096 11:27:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:59.096 11:27:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:59.354 /dev/nbd0
00:05:59.354 11:27:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:59.354 11:27:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:59.354 1+0 records in
1+0 records out
00:05:59.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220554 s, 18.6 MB/s
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:59.354 11:27:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:59.354 11:27:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:59.354 11:27:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:59.354 11:27:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:59.613 /dev/nbd1
00:05:59.613 11:28:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:59.613 11:28:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:59.613 1+0 records in
1+0 records out
00:05:59.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293396 s, 14.0 MB/s
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:59.613 11:28:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:59.613 11:28:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:59.613 11:28:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:59.613 11:28:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:59.613 11:28:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.613 11:28:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:00.179 {
00:06:00.179 "bdev_name": "Malloc0",
00:06:00.179 "nbd_device": "/dev/nbd0"
00:06:00.179 },
00:06:00.179 {
00:06:00.179 "bdev_name": "Malloc1",
00:06:00.179 "nbd_device": "/dev/nbd1"
00:06:00.179 }
00:06:00.179 ]'
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:00.179 {
00:06:00.179 "bdev_name": "Malloc0",
00:06:00.179 "nbd_device": "/dev/nbd0"
00:06:00.179 },
00:06:00.179 {
00:06:00.179 "bdev_name": "Malloc1",
00:06:00.179 "nbd_device": "/dev/nbd1"
00:06:00.179 }
00:06:00.179 ]'
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:00.179 /dev/nbd1'
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:00.179 /dev/nbd1'
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:00.179 256+0 records in
256+0 records out
00:06:00.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00904663 s, 116 MB/s
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:00.179 256+0 records in
256+0 records out
00:06:00.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254763 s, 41.2 MB/s
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:00.179 256+0 records in
256+0 records out
00:06:00.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276596 s, 37.9 MB/s
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:00.179 11:28:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:00.436 11:28:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:00.436 11:28:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:00.436 11:28:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:00.436 11:28:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:00.436 11:28:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:00.436 11:28:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:00.436 11:28:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:00.436 11:28:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:00.436 11:28:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:00.436 11:28:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:00.694 11:28:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:00.695 11:28:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:00.695 11:28:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:00.695 11:28:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:00.695 11:28:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:00.695 11:28:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:00.695 11:28:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:00.695 11:28:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:00.695 11:28:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:00.695 11:28:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:00.695 11:28:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:01.261 11:28:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:01.261 11:28:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:01.261 11:28:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:01.261 11:28:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:01.261 11:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:01.261 11:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:01.261 11:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:01.261 11:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:01.261 11:28:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:01.261 11:28:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:01.261 11:28:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:01.261 11:28:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:01.261 11:28:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:01.520 11:28:02 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:01.778 [2024-12-16 11:28:02.274212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:01.778 [2024-12-16 11:28:02.322408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:01.778 [2024-12-16 11:28:02.322429] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.778 [2024-12-16 11:28:02.379997] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:01.778 [2024-12-16 11:28:02.380056] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:05.091 spdk_app_start Round 2
00:06:05.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:05.091 11:28:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:05.091 11:28:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:05.091 11:28:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61981 /var/tmp/spdk-nbd.sock
00:06:05.091 11:28:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 61981 ']'
00:06:05.091 11:28:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:05.091 11:28:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:05.091 11:28:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
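The round above also shows waitfornbd at work: after nbd_start_disk, the helper treats the device as usable only once its name appears in /proc/partitions and a single-block O_DIRECT read actually returns data. A sketch under those assumptions (the poll interval and the /tmp scratch path are illustrative, not from the log; the bounds and commands mirror the trace):

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break  # device registered yet?
        sleep 0.1  # assumed poll interval
    done
    for ((i = 1; i <= 20; i++)); do
        # Read one 4 KiB block with O_DIRECT; an empty copy means not ready yet.
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2> /dev/null; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
    done
    return 1
}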
00:06:05.091 11:28:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:05.091 11:28:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:05.091 11:28:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:05.091 11:28:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:05.091 11:28:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:05.091 Malloc0
00:06:05.091 11:28:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:05.349 Malloc1
00:06:05.349 11:28:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:05.349 11:28:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:05.608 /dev/nbd0
00:06:05.608 11:28:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:05.608 11:28:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:05.608 1+0 records in
1+0 records out
00:06:05.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288114 s, 14.2 MB/s
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:05.608 11:28:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:05.608 11:28:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:05.608 11:28:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:05.608 11:28:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:05.866 /dev/nbd1
00:06:05.866 11:28:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:06.124 11:28:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:06.124 1+0 records in
1+0 records out
00:06:06.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222549 s, 18.4 MB/s
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:06.124 11:28:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:06.124 11:28:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:06.124 11:28:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:06.124 11:28:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:06.124 11:28:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:06.124 11:28:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:06.383 {
00:06:06.383 "bdev_name": "Malloc0",
00:06:06.383 "nbd_device": "/dev/nbd0"
00:06:06.383 },
00:06:06.383 {
00:06:06.383 "bdev_name": "Malloc1",
00:06:06.383 "nbd_device": "/dev/nbd1"
00:06:06.383 }
00:06:06.383 ]'
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:06.383 {
00:06:06.383 "bdev_name": "Malloc0",
00:06:06.383 "nbd_device": "/dev/nbd0"
00:06:06.383 },
00:06:06.383 {
00:06:06.383 "bdev_name": "Malloc1",
00:06:06.383 "nbd_device": "/dev/nbd1"
00:06:06.383 }
00:06:06.383 ]'
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:06.383 /dev/nbd1'
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:06.383 /dev/nbd1'
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:06.383 256+0 records in
256+0 records out
00:06:06.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108072 s, 97.0 MB/s
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:06.383 11:28:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:06.383 256+0 records in
256+0 records out
00:06:06.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264015 s, 39.7 MB/s
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:07.383 256+0 records in
256+0 records out
00:06:07.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287557 s, 36.5 MB/s
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:07.383 11:28:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:06.641 11:28:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:06.641 11:28:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:06.641 11:28:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:06.641 11:28:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:06.641 11:28:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:06.641 11:28:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:06.641 11:28:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:06.641 11:28:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:06.641 11:28:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:06.641 11:28:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:06.899 11:28:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:06.899 11:28:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:06.899 11:28:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:06.899 11:28:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:06.899 11:28:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:06.899 11:28:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:06.899 11:28:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:06.899 11:28:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:06.899 11:28:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:06.899 11:28:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:06.899 11:28:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:07.465 11:28:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:07.465 11:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:07.465 11:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:07.465 11:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:07.465 11:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:07.465 11:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:07.465 11:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:07.465 11:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:07.465 11:28:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:07.465 11:28:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:07.465 11:28:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:07.465 11:28:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:07.465 11:28:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:07.723 11:28:08 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:07.723 [2024-12-16 11:28:08.418897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:07.723 [2024-12-16 11:28:08.467916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:07.723 [2024-12-16 11:28:08.467925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.982 [2024-12-16 11:28:08.522000] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:07.982 [2024-12-16 11:28:08.522081] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:10.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:10.510 11:28:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61981 /var/tmp/spdk-nbd.sock
00:06:10.510 11:28:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 61981 ']'
00:06:10.510 11:28:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:10.510 11:28:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:10.510 11:28:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
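Each round's data path, traced above via nbd_dd_data_verify, is the same: fill a scratch file with random bytes, dd it onto every NBD device with O_DIRECT, then cmp the first 1M of each device back against the file. A condensed sketch of the two modes as they appear in the trace (error handling omitted; the /tmp scratch path is illustrative):

nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2
    local tmp_file=/tmp/nbdrandtest  # scratch file path is illustrative
    local i
    if [ "$operation" = write ]; then
        # 256 x 4 KiB = 1 MiB of random data, written to each device.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"  # byte-compare the first 1 MiB
        done
        rm "$tmp_file"
    fi
}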
00:06:10.511 11:28:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:10.511 11:28:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:11.076 11:28:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:11.076 11:28:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:11.076 11:28:11 event.app_repeat -- event/event.sh@39 -- # killprocess 61981
00:06:11.076 11:28:11 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 61981 ']'
00:06:11.076 11:28:11 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 61981
00:06:11.076 11:28:11 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:06:11.076 11:28:11 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:11.076 11:28:11 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61981
00:06:11.076 killing process with pid 61981
00:06:11.076 11:28:11 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:11.076 11:28:11 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:11.076 11:28:11 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61981'
00:06:11.076 11:28:11 event.app_repeat -- common/autotest_common.sh@973 -- # kill 61981
00:06:11.076 11:28:11 event.app_repeat -- common/autotest_common.sh@978 -- # wait 61981
00:06:11.076 spdk_app_start is called in Round 0.
00:06:11.076 Shutdown signal received, stop current app iteration
00:06:11.076 Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 reinitialization...
00:06:11.076 spdk_app_start is called in Round 1.
00:06:11.076 Shutdown signal received, stop current app iteration
00:06:11.076 Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 reinitialization...
00:06:11.076 spdk_app_start is called in Round 2.
00:06:11.076 Shutdown signal received, stop current app iteration
00:06:11.076 Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 reinitialization...
00:06:11.076 spdk_app_start is called in Round 3.
00:06:11.077 Shutdown signal received, stop current app iteration
00:06:11.077 11:28:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:11.077 11:28:11 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:11.077
00:06:11.077 real 0m19.815s
00:06:11.077 user 0m45.431s
00:06:11.077 sys 0m3.146s
00:06:11.077 11:28:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:11.077 ************************************
00:06:11.077 END TEST app_repeat
00:06:11.077 11:28:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:11.077 ************************************
00:06:11.335 11:28:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:11.335 11:28:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:11.335 11:28:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:11.335 11:28:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:11.335 11:28:11 event -- common/autotest_common.sh@10 -- # set +x
00:06:11.335 ************************************
00:06:11.335 START TEST cpu_locks
00:06:11.335 ************************************
00:06:11.335 11:28:11 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:11.335 * Looking for test storage...
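The killprocess helper traced at the end of app_repeat (and reused throughout cpu_locks below) guards the kill with a liveness check and a process-name lookup, so a recycled PID is never signalled by mistake. A sketch of the pattern; the sudo branch's behaviour is an assumption, since the trace only shows the comparison:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1     # is the PID still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # Assumption: a sudo wrapper would need different handling in the real helper;
    # in this log the name is always reactor_0, so the plain kill path is taken.
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" || true
}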
00:06:11.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:11.335 11:28:11 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:11.335 11:28:11 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:06:11.335 11:28:11 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:11.335 11:28:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:11.335 11:28:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:11.335 11:28:12 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:11.335 11:28:12 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:11.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:11.335 --rc genhtml_branch_coverage=1
00:06:11.335 --rc genhtml_function_coverage=1
00:06:11.335 --rc genhtml_legend=1
00:06:11.335 --rc geninfo_all_blocks=1
00:06:11.335 --rc geninfo_unexecuted_blocks=1
00:06:11.335
00:06:11.335 '
00:06:11.335 11:28:12 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:11.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:11.335 --rc genhtml_branch_coverage=1
00:06:11.335 --rc genhtml_function_coverage=1
00:06:11.335 --rc genhtml_legend=1
00:06:11.335 --rc geninfo_all_blocks=1
00:06:11.335 --rc geninfo_unexecuted_blocks=1
00:06:11.335
00:06:11.335 '
00:06:11.335 11:28:12 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:11.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:11.335 --rc genhtml_branch_coverage=1
00:06:11.335 --rc genhtml_function_coverage=1
00:06:11.335 --rc genhtml_legend=1
00:06:11.335 --rc geninfo_all_blocks=1
00:06:11.335 --rc geninfo_unexecuted_blocks=1
00:06:11.335
00:06:11.335 '
00:06:11.335 11:28:12 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:11.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:11.335 --rc genhtml_branch_coverage=1
00:06:11.335 --rc genhtml_function_coverage=1
00:06:11.335 --rc genhtml_legend=1
00:06:11.335 --rc geninfo_all_blocks=1
00:06:11.335 --rc geninfo_unexecuted_blocks=1
00:06:11.335
00:06:11.335 '
00:06:11.335 11:28:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:11.335 11:28:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:11.335 11:28:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:11.335 11:28:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:11.335 11:28:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:11.335 11:28:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:11.335 11:28:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:11.335 ************************************
00:06:11.335 START TEST default_locks
00:06:11.335 ************************************
00:06:11.335 11:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:06:11.335 11:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62617
00:06:11.335 11:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62617
00:06:11.335 11:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:11.335 11:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 62617 ']'
00:06:11.335 11:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:11.335 11:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:11.335 11:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:11.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:11.336 11:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:11.336 11:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:11.594 [2024-12-16 11:28:12.130738] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:06:11.594 [2024-12-16 11:28:12.131105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62617 ]
00:06:11.594 [2024-12-16 11:28:12.285194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:11.852 [2024-12-16 11:28:12.356873] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.110 11:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:12.110 11:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:06:12.110 11:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62617
00:06:12.110 11:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62617
00:06:12.110 11:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:12.369 11:28:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62617
00:06:12.369 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 62617 ']'
00:06:12.369 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 62617
00:06:12.369 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:06:12.369 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:12.369 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62617
00:06:12.369 killing process with pid 62617
00:06:12.369 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:12.369 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:12.369 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62617'
00:06:12.369 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 62617
00:06:12.369 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 62617
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62617
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 62617
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:12.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:12.935 ERROR: process (pid: 62617) is no longer running
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 62617
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 62617 ']'
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:12.935 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (62617) - No such process
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:12.935 ************************************
00:06:12.935 END TEST default_locks
00:06:12.935 ************************************
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:12.935
00:06:12.935 real 0m1.446s
00:06:12.935 user 0m1.433s
00:06:12.935 sys 0m0.566s
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:12.935 11:28:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:12.935 11:28:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:12.935 11:28:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:12.935 11:28:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:12.935 11:28:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:12.935 ************************************
00:06:12.935 START TEST default_locks_via_rpc
00:06:12.935 ************************************
00:06:12.935 11:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:06:12.935 11:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62667
00:06:12.935 11:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62667
00:06:12.935 11:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:12.935 11:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62667 ']'
00:06:12.935 11:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:12.935 11:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:12.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:12.935 11:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:12.935 11:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:12.935 11:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:12.935 [2024-12-16 11:28:13.633773] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:06:13.193 [2024-12-16 11:28:13.633927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62667 ]
00:06:13.194 [2024-12-16 11:28:13.787584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.452 [2024-12-16 11:28:13.849725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62667
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62667
00:06:13.452 11:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:14.018 11:28:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62667
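The locks_exist assertion traced just above is a one-line observation: while spdk_tgt holds a core, lslocks must show a POSIX lock named spdk_cpu_lock held by that PID. A sketch of the check exactly as it appears in the trace:

locks_exist() {
    local pid=$1
    # spdk_cpu_lock is the lock-file name the target takes per claimed core.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

The negative half of default_locks (the "ERROR: process ... is no longer running" block above) is the same check run after killprocess, expecting waitforlisten to fail.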
00:06:14.018 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 62667 ']'
00:06:14.018 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 62667
00:06:14.018 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:14.018 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:14.018 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62667
00:06:14.018 killing process with pid 62667
00:06:14.018 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:14.018 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:14.018 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62667'
00:06:14.018 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 62667
00:06:14.018 11:28:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 62667
00:06:14.584 ************************************
00:06:14.584 END TEST default_locks_via_rpc
00:06:14.584 ************************************
00:06:14.584
00:06:14.584 real 0m1.488s
00:06:14.584 user 0m1.448s
00:06:14.584 sys 0m0.572s
00:06:14.584 11:28:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:14.584 11:28:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.584 11:28:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:14.584 11:28:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:14.584 11:28:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:14.584 11:28:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:14.584 ************************************
00:06:14.584 START TEST non_locking_app_on_locked_coremask
00:06:14.584 ************************************
00:06:14.584 11:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:14.584 11:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62723
00:06:14.584 11:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:14.584 11:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62723 /var/tmp/spdk.sock
00:06:14.584 11:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62723 ']'
00:06:14.584 11:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:14.584 11:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:14.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:14.584 11:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
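The via-RPC variant that just finished differs from default_locks only in when the lock is toggled: the target starts with its core lock held, releases it at runtime, and re-acquires it, with an lslocks check after each step. A sketch of that flow using the RPC methods visible in the trace (rpc_cmd is the harness's wrapper around scripts/rpc.py):

# Sketch of default_locks_via_rpc's core sequence, assuming $pid is the running spdk_tgt.
rpc_cmd framework_disable_cpumask_locks   # lock released while the app keeps running
! locks_exist "$pid"                      # expect no spdk_cpu_lock entry now
rpc_cmd framework_enable_cpumask_locks    # lock re-acquired on the fly
locks_exist "$pid"                        # and it must be visible again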
00:06:14.584 11:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.584 11:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.584 [2024-12-16 11:28:15.187313] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:06:14.584 [2024-12-16 11:28:15.187453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62723 ] 00:06:14.841 [2024-12-16 11:28:15.337504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.842 [2024-12-16 11:28:15.399132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.776 11:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.776 11:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.776 11:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62751 00:06:15.776 11:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62751 /var/tmp/spdk2.sock 00:06:15.776 11:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:15.776 11:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62751 ']' 00:06:15.776 11:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.776 11:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.776 11:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.776 11:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.776 11:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.776 [2024-12-16 11:28:16.291750] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:06:15.776 [2024-12-16 11:28:16.292101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62751 ] 00:06:15.776 [2024-12-16 11:28:16.456986] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
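This is the core of the non_locking_app_on_locked_coremask case: the first target claims core 0, and a second target on the same core mask still starts because --disable-cpumask-locks skips the claim, hence the "CPU core locks deactivated" notice above. A condensed sketch of the two launches, using the flags from the trace (the real harness uses waitforlisten between them):

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_BIN" -m 0x1 &                                                  # claims core 0
    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no claim
    # Both run; only the first shows spdk_cpu_lock files in lslocks output.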
00:06:15.776 [2024-12-16 11:28:16.457054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.033 [2024-12-16 11:28:16.585256] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.968 11:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.968 11:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:16.968 11:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62723 00:06:16.968 11:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62723 00:06:16.968 11:28:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.532 11:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62723 00:06:17.532 11:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 62723 ']' 00:06:17.532 11:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 62723 00:06:17.532 11:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.532 11:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.532 11:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62723 00:06:17.790 killing process with pid 62723 00:06:17.790 11:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.790 11:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.790 11:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62723' 00:06:17.790 11:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 62723 00:06:17.790 11:28:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 62723 00:06:18.725 11:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62751 00:06:18.725 11:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 62751 ']' 00:06:18.725 11:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 62751 00:06:18.725 11:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.725 11:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.725 11:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62751 00:06:18.725 killing process with pid 62751 00:06:18.725 11:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.725 11:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.725 11:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62751' 00:06:18.725 11:28:19 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 62751 00:06:18.725 11:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 62751 00:06:18.983 00:06:18.983 real 0m4.481s 00:06:18.983 user 0m5.106s 00:06:18.983 sys 0m1.234s 00:06:18.983 11:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.983 ************************************ 00:06:18.983 END TEST non_locking_app_on_locked_coremask 00:06:18.983 ************************************ 00:06:18.983 11:28:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.983 11:28:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:18.983 11:28:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.983 11:28:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.983 11:28:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.983 ************************************ 00:06:18.983 START TEST locking_app_on_unlocked_coremask 00:06:18.983 ************************************ 00:06:18.983 11:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:18.983 11:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62830 00:06:18.983 11:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 62830 /var/tmp/spdk.sock 00:06:18.983 11:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:18.983 11:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62830 ']' 00:06:18.983 11:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.983 11:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.983 11:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.983 11:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.983 11:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.983 [2024-12-16 11:28:19.697005] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:06:18.983 [2024-12-16 11:28:19.697108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62830 ] 00:06:19.242 [2024-12-16 11:28:19.841276] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:19.242 [2024-12-16 11:28:19.841348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.242 [2024-12-16 11:28:19.909162] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.500 11:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.500 11:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.500 11:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62850 00:06:19.500 11:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 62850 /var/tmp/spdk2.sock 00:06:19.500 11:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62850 ']' 00:06:19.500 11:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.500 11:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.500 11:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.500 11:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.500 11:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.500 11:28:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.758 [2024-12-16 11:28:20.284292] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:06:19.758 [2024-12-16 11:28:20.284649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62850 ] 00:06:19.758 [2024-12-16 11:28:20.447794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.016 [2024-12-16 11:28:20.579889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.950 11:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.951 11:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.951 11:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 62850 00:06:20.951 11:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.951 11:28:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62850 00:06:21.516 11:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 62830 00:06:21.516 11:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 62830 ']' 00:06:21.516 11:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 62830 00:06:21.516 11:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.516 11:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.516 11:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62830 00:06:21.516 11:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.516 11:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.516 killing process with pid 62830 00:06:21.516 11:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62830' 00:06:21.516 11:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 62830 00:06:21.516 11:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 62830 00:06:22.451 11:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 62850 00:06:22.451 11:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 62850 ']' 00:06:22.451 11:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 62850 00:06:22.451 11:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.451 11:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.451 11:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62850 00:06:22.451 killing process with pid 62850 00:06:22.451 11:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.451 11:28:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.451 11:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62850' 00:06:22.451 11:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 62850 00:06:22.451 11:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 62850 00:06:22.709 ************************************ 00:06:22.709 END TEST locking_app_on_unlocked_coremask 00:06:22.709 ************************************ 00:06:22.709 00:06:22.709 real 0m3.811s 00:06:22.709 user 0m4.161s 00:06:22.709 sys 0m1.162s 00:06:22.709 11:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.709 11:28:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.968 11:28:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:22.968 11:28:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.968 11:28:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.968 11:28:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.968 ************************************ 00:06:22.968 START TEST locking_app_on_locked_coremask 00:06:22.968 ************************************ 00:06:22.968 11:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:22.968 11:28:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62929 00:06:22.968 11:28:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 62929 /var/tmp/spdk.sock 00:06:22.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.968 11:28:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.968 11:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62929 ']' 00:06:22.968 11:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.968 11:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.968 11:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.968 11:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.968 11:28:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.968 [2024-12-16 11:28:23.570611] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:06:22.968 [2024-12-16 11:28:23.570987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62929 ] 00:06:22.968 [2024-12-16 11:28:23.716282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.226 [2024-12-16 11:28:23.770702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62944 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62944 /var/tmp/spdk2.sock 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 62944 /var/tmp/spdk2.sock 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 62944 /var/tmp/spdk2.sock 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 62944 ']' 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.485 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.485 [2024-12-16 11:28:24.130757] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
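Here the second spdk_tgt is launched on the same core without --disable-cpumask-locks, so the claim is expected to fail; the ERROR lines just below ("Cannot create lock on core 0, probably process 62929 has claimed it") are the assertion target. A minimal reproduction of the failing launch (binary path as in the log; the sleep is a crude stand-in for waitforlisten, and a nonzero exit from the second instance is assumed from the "exiting" error above):

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_BIN" -m 0x1 &                        # first instance claims core 0
    sleep 1                                     # let it claim the lock
    if ! "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second instance refused to start, as expected"
    fi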
00:06:23.485 [2024-12-16 11:28:24.131159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62944 ] 00:06:23.743 [2024-12-16 11:28:24.295108] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62929 has claimed it. 00:06:23.743 [2024-12-16 11:28:24.299235] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:24.309 ERROR: process (pid: 62944) is no longer running 00:06:24.309 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (62944) - No such process 00:06:24.309 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.309 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:24.309 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:24.309 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:24.309 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:24.309 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:24.309 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 62929 00:06:24.309 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62929 00:06:24.309 11:28:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.566 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 62929 00:06:24.566 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 62929 ']' 00:06:24.566 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 62929 00:06:24.566 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:24.566 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.566 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62929 00:06:24.824 killing process with pid 62929 00:06:24.824 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.824 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.824 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62929' 00:06:24.824 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 62929 00:06:24.824 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 62929 00:06:25.081 00:06:25.081 real 0m2.228s 00:06:25.081 user 0m2.495s 00:06:25.081 sys 0m0.633s 00:06:25.081 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.081 ************************************ 00:06:25.081 END 
TEST locking_app_on_locked_coremask 00:06:25.081 ************************************ 00:06:25.081 11:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.081 11:28:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:25.081 11:28:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.081 11:28:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.081 11:28:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.081 ************************************ 00:06:25.081 START TEST locking_overlapped_coremask 00:06:25.081 ************************************ 00:06:25.081 11:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:25.081 11:28:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62995 00:06:25.081 11:28:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:25.081 11:28:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62995 /var/tmp/spdk.sock 00:06:25.081 11:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 62995 ']' 00:06:25.081 11:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.081 11:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.081 11:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.081 11:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.081 11:28:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.339 [2024-12-16 11:28:25.860207] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:06:25.339 [2024-12-16 11:28:25.860496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62995 ] 00:06:25.339 [2024-12-16 11:28:26.009223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.339 [2024-12-16 11:28:26.075188] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.339 [2024-12-16 11:28:26.075321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.339 [2024-12-16 11:28:26.075325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63030 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63030 /var/tmp/spdk2.sock 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 63030 /var/tmp/spdk2.sock 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 63030 /var/tmp/spdk2.sock 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 63030 ']' 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.273 11:28:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.273 [2024-12-16 11:28:26.937934] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
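The two masks used in this test overlap on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so only core 2 is contested, which is why the claim error that follows names core 2. The overlap can be confirmed with shell arithmetic:

    printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. bit 2 -> core 2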
00:06:26.273 [2024-12-16 11:28:26.938064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63030 ] 00:06:26.532 [2024-12-16 11:28:27.110808] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62995 has claimed it. 00:06:26.532 [2024-12-16 11:28:27.111038] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.100 ERROR: process (pid: 63030) is no longer running 00:06:27.100 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (63030) - No such process 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62995 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 62995 ']' 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 62995 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62995 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62995' 00:06:27.100 killing process with pid 62995 00:06:27.100 11:28:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 62995 00:06:27.100 11:28:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 62995 00:06:27.359 00:06:27.359 real 0m2.325s 00:06:27.359 user 0m6.639s 00:06:27.359 sys 0m0.462s 00:06:27.359 11:28:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.629 11:28:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.629 ************************************ 00:06:27.629 END TEST locking_overlapped_coremask 00:06:27.629 ************************************ 00:06:27.629 11:28:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:27.629 11:28:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.629 11:28:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.629 11:28:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.629 ************************************ 00:06:27.629 START TEST locking_overlapped_coremask_via_rpc 00:06:27.629 ************************************ 00:06:27.629 11:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:27.629 11:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63077 00:06:27.629 11:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:27.629 11:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63077 /var/tmp/spdk.sock 00:06:27.629 11:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 63077 ']' 00:06:27.629 11:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.629 11:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.629 11:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.629 11:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.629 11:28:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.629 [2024-12-16 11:28:28.230596] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:06:27.629 [2024-12-16 11:28:28.230918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63077 ] 00:06:27.900 [2024-12-16 11:28:28.379366] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:27.900 [2024-12-16 11:28:28.379420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.900 [2024-12-16 11:28:28.451207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.901 [2024-12-16 11:28:28.451369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.901 [2024-12-16 11:28:28.451370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.836 11:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.836 11:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:28.836 11:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63107 00:06:28.836 11:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:28.836 11:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63107 /var/tmp/spdk2.sock 00:06:28.836 11:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 63107 ']' 00:06:28.836 11:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.836 11:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.836 11:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.836 11:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.836 11:28:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.836 [2024-12-16 11:28:29.324686] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:06:28.836 [2024-12-16 11:28:29.325108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63107 ] 00:06:28.836 [2024-12-16 11:28:29.499805] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.836 [2024-12-16 11:28:29.504154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.094 [2024-12-16 11:28:29.659938] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.094 [2024-12-16 11:28:29.660024] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:29.094 [2024-12-16 11:28:29.660027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.030 [2024-12-16 11:28:30.448290] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63077 has claimed it. 
00:06:30.030 2024/12/16 11:28:30 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:30.030 request: 00:06:30.030 { 00:06:30.030 "method": "framework_enable_cpumask_locks", 00:06:30.030 "params": {} 00:06:30.030 } 00:06:30.030 Got JSON-RPC error response 00:06:30.030 GoRPCClient: error on JSON-RPC call 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63077 /var/tmp/spdk.sock 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 63077 ']' 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.030 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.289 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.289 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.289 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63107 /var/tmp/spdk2.sock 00:06:30.289 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 63107 ']' 00:06:30.289 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.289 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.289 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
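In this via_rpc variant both targets start with --disable-cpumask-locks, so neither claims cores at boot; the claim is made at runtime through the framework_enable_cpumask_locks RPC. The first call (on /var/tmp/spdk.sock) succeeds and locks the 0x7 cores; the second (on /var/tmp/spdk2.sock) fails with the Code=-32603 "Failed to claim CPU core: 2" response shown above. The same calls issued with SPDK's rpc.py (the in-tree script path is assumed here, matching the repo layout in the log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # claims cores 0-2
    "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already locked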
00:06:30.289 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.289 11:28:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.548 11:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.549 11:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.549 11:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.549 11:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.549 11:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.549 11:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.549 00:06:30.549 real 0m2.893s 00:06:30.549 user 0m1.566s 00:06:30.549 sys 0m0.251s 00:06:30.549 11:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.549 11:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.549 ************************************ 00:06:30.549 END TEST locking_overlapped_coremask_via_rpc 00:06:30.549 ************************************ 00:06:30.549 11:28:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.549 11:28:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63077 ]] 00:06:30.549 11:28:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63077 00:06:30.549 11:28:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 63077 ']' 00:06:30.549 11:28:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 63077 00:06:30.549 11:28:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:30.549 11:28:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.549 11:28:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63077 00:06:30.549 11:28:31 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.549 11:28:31 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.549 killing process with pid 63077 00:06:30.549 11:28:31 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63077' 00:06:30.549 11:28:31 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 63077 00:06:30.549 11:28:31 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 63077 00:06:31.117 11:28:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63107 ]] 00:06:31.117 11:28:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63107 00:06:31.117 11:28:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 63107 ']' 00:06:31.117 11:28:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 63107 00:06:31.117 11:28:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:31.117 11:28:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.117 
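check_remaining_locks, expanded in the trace above, asserts that exactly the expected per-core lock files exist after the RPC-side claim. The check itself is a glob-against-brace-expansion comparison, essentially verbatim from the trace:

    locks=(/var/tmp/spdk_cpu_lock_*)                      # whatever lock files exist now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # cores 0-2 for mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]]           # equal iff no extra or missing locks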
11:28:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63107 00:06:31.117 11:28:31 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:31.117 11:28:31 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:31.117 11:28:31 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63107' 00:06:31.117 killing process with pid 63107 00:06:31.117 11:28:31 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 63107 00:06:31.117 11:28:31 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 63107 00:06:31.684 11:28:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.684 11:28:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:31.684 11:28:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63077 ]] 00:06:31.684 11:28:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63077 00:06:31.684 11:28:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 63077 ']' 00:06:31.684 11:28:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 63077 00:06:31.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (63077) - No such process 00:06:31.684 Process with pid 63077 is not found 00:06:31.684 11:28:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 63077 is not found' 00:06:31.684 11:28:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63107 ]] 00:06:31.684 11:28:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63107 00:06:31.684 11:28:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 63107 ']' 00:06:31.684 11:28:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 63107 00:06:31.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (63107) - No such process 00:06:31.684 Process with pid 63107 is not found 00:06:31.684 11:28:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 63107 is not found' 00:06:31.684 11:28:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.684 ************************************ 00:06:31.684 END TEST cpu_locks 00:06:31.684 ************************************ 00:06:31.684 00:06:31.684 real 0m20.444s 00:06:31.684 user 0m38.241s 00:06:31.684 sys 0m5.870s 00:06:31.684 11:28:32 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.684 11:28:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.684 00:06:31.684 real 0m48.652s 00:06:31.684 user 1m36.084s 00:06:31.684 sys 0m9.812s 00:06:31.684 11:28:32 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.684 11:28:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.684 ************************************ 00:06:31.684 END TEST event 00:06:31.684 ************************************ 00:06:31.685 11:28:32 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.685 11:28:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.685 11:28:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.685 11:28:32 -- common/autotest_common.sh@10 -- # set +x 00:06:31.685 ************************************ 00:06:31.685 START TEST thread 00:06:31.685 ************************************ 00:06:31.685 11:28:32 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.943 * Looking for test storage... 
00:06:31.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:31.943 11:28:32 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.943 11:28:32 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.943 11:28:32 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.943 11:28:32 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.943 11:28:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.943 11:28:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.943 11:28:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.943 11:28:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.943 11:28:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.943 11:28:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.943 11:28:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.943 11:28:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.943 11:28:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.943 11:28:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.943 11:28:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.943 11:28:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:31.943 11:28:32 thread -- scripts/common.sh@345 -- # : 1 00:06:31.943 11:28:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.943 11:28:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.943 11:28:32 thread -- scripts/common.sh@365 -- # decimal 1 00:06:31.943 11:28:32 thread -- scripts/common.sh@353 -- # local d=1 00:06:31.943 11:28:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.943 11:28:32 thread -- scripts/common.sh@355 -- # echo 1 00:06:31.943 11:28:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.943 11:28:32 thread -- scripts/common.sh@366 -- # decimal 2 00:06:31.943 11:28:32 thread -- scripts/common.sh@353 -- # local d=2 00:06:31.943 11:28:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.943 11:28:32 thread -- scripts/common.sh@355 -- # echo 2 00:06:31.943 11:28:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.943 11:28:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.943 11:28:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.943 11:28:32 thread -- scripts/common.sh@368 -- # return 0 00:06:31.943 11:28:32 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.943 11:28:32 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.943 --rc genhtml_branch_coverage=1 00:06:31.943 --rc genhtml_function_coverage=1 00:06:31.943 --rc genhtml_legend=1 00:06:31.943 --rc geninfo_all_blocks=1 00:06:31.943 --rc geninfo_unexecuted_blocks=1 00:06:31.943 00:06:31.943 ' 00:06:31.943 11:28:32 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.943 --rc genhtml_branch_coverage=1 00:06:31.943 --rc genhtml_function_coverage=1 00:06:31.943 --rc genhtml_legend=1 00:06:31.943 --rc geninfo_all_blocks=1 00:06:31.943 --rc geninfo_unexecuted_blocks=1 00:06:31.943 00:06:31.943 ' 00:06:31.943 11:28:32 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:31.943 --rc genhtml_branch_coverage=1 00:06:31.943 --rc genhtml_function_coverage=1 00:06:31.943 --rc genhtml_legend=1 00:06:31.943 --rc geninfo_all_blocks=1 00:06:31.943 --rc geninfo_unexecuted_blocks=1 00:06:31.943 00:06:31.944 ' 00:06:31.944 11:28:32 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.944 --rc genhtml_branch_coverage=1 00:06:31.944 --rc genhtml_function_coverage=1 00:06:31.944 --rc genhtml_legend=1 00:06:31.944 --rc geninfo_all_blocks=1 00:06:31.944 --rc geninfo_unexecuted_blocks=1 00:06:31.944 00:06:31.944 ' 00:06:31.944 11:28:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.944 11:28:32 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:31.944 11:28:32 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.944 11:28:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.944 ************************************ 00:06:31.944 START TEST thread_poller_perf 00:06:31.944 ************************************ 00:06:31.944 11:28:32 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.944 [2024-12-16 11:28:32.595000] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:06:31.944 [2024-12-16 11:28:32.595097] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63271 ] 00:06:32.202 [2024-12-16 11:28:32.744380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.202 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:32.202 [2024-12-16 11:28:32.812155] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.219 [2024-12-16T11:28:33.972Z] ====================================== 00:06:33.219 [2024-12-16T11:28:33.972Z] busy:2208110764 (cyc) 00:06:33.219 [2024-12-16T11:28:33.972Z] total_run_count: 304000 00:06:33.219 [2024-12-16T11:28:33.972Z] tsc_hz: 2200000000 (cyc) 00:06:33.219 [2024-12-16T11:28:33.972Z] ====================================== 00:06:33.219 [2024-12-16T11:28:33.972Z] poller_cost: 7263 (cyc), 3301 (nsec) 00:06:33.219 00:06:33.219 real 0m1.296s 00:06:33.219 user 0m1.131s 00:06:33.219 sys 0m0.057s 00:06:33.219 11:28:33 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.219 ************************************ 00:06:33.219 END TEST thread_poller_perf 00:06:33.219 ************************************ 00:06:33.219 11:28:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.219 11:28:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:33.219 11:28:33 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:33.219 11:28:33 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.219 11:28:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.219 ************************************ 00:06:33.219 START TEST thread_poller_perf 00:06:33.219 ************************************ 00:06:33.219 11:28:33 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:33.219 [2024-12-16 11:28:33.943022] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:06:33.219 [2024-12-16 11:28:33.943171] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63302 ] 00:06:33.477 [2024-12-16 11:28:34.090961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.477 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:33.478 [2024-12-16 11:28:34.133458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.857 [2024-12-16T11:28:35.610Z] ====================================== 00:06:34.857 [2024-12-16T11:28:35.610Z] busy:2202346846 (cyc) 00:06:34.857 [2024-12-16T11:28:35.610Z] total_run_count: 4086000 00:06:34.857 [2024-12-16T11:28:35.610Z] tsc_hz: 2200000000 (cyc) 00:06:34.857 [2024-12-16T11:28:35.610Z] ====================================== 00:06:34.857 [2024-12-16T11:28:35.610Z] poller_cost: 538 (cyc), 244 (nsec) 00:06:34.857 00:06:34.857 real 0m1.260s 00:06:34.857 user 0m1.102s 00:06:34.857 sys 0m0.050s 00:06:34.857 11:28:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.857 ************************************ 00:06:34.857 END TEST thread_poller_perf 00:06:34.857 ************************************ 00:06:34.857 11:28:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.857 11:28:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:34.857 00:06:34.857 real 0m2.838s 00:06:34.857 user 0m2.376s 00:06:34.857 sys 0m0.249s 00:06:34.857 11:28:35 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.857 11:28:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.857 ************************************ 00:06:34.857 END TEST thread 00:06:34.857 ************************************ 00:06:34.857 11:28:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:34.857 11:28:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:34.857 11:28:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.857 11:28:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.857 11:28:35 -- common/autotest_common.sh@10 -- # set +x 00:06:34.858 ************************************ 00:06:34.858 START TEST app_cmdline 00:06:34.858 ************************************ 00:06:34.858 11:28:35 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:34.858 * Looking for test storage... 
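Both poller_perf summaries above use the same arithmetic: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts that through the 2200000000 Hz TSC. Recomputing the logged numbers:

    def poller_cost(busy_cyc: int, runs: int, tsc_hz: int) -> tuple[int, int]:
        cyc = busy_cyc // runs                    # cycles per poller invocation
        nsec = cyc * 1_000_000_000 // tsc_hz      # cycles -> ns at tsc_hz
        return cyc, nsec

    # values copied from the two runs above
    print(poller_cost(2_208_110_764, 304_000, 2_200_000_000))    # (7263, 3301), 1 us period
    print(poller_cost(2_202_346_846, 4_086_000, 2_200_000_000))  # (538, 244), 0 us period

The back-to-back runs contrast timer-driven pollers (1 microsecond period) with busy pollers (0 microseconds): roughly a 13x difference in per-invocation cost here.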
00:06:34.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:06:34.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:34.858 11:28:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:06:34.858 11:28:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63385
00:06:34.858 11:28:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63385
00:06:34.858 11:28:35 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 63385 ']'
00:06:34.858 11:28:35 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:06:34.858 11:28:35 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:34.858 11:28:35 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:34.858 11:28:35 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:34.858 11:28:35 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:34.858 11:28:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:34.858 [2024-12-16 11:28:35.523628] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:06:34.858 [2024-12-16 11:28:35.523961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63385 ] 00:06:35.116 [2024-12-16 11:28:35.668312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.116 [2024-12-16 11:28:35.755323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.052 11:28:36 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.052 11:28:36 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:36.052 11:28:36 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:36.311 { 00:06:36.311 "fields": { 00:06:36.311 "commit": "c360129e7", 00:06:36.311 "major": 25, 00:06:36.311 "minor": 1, 00:06:36.311 "patch": 0, 00:06:36.311 "suffix": "-pre" 00:06:36.311 }, 00:06:36.311 "version": "SPDK v25.01-pre git sha1 c360129e7" 00:06:36.311 } 00:06:36.311 11:28:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:36.311 11:28:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:36.311 11:28:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:36.311 11:28:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:36.311 11:28:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:36.311 11:28:36 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.311 11:28:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:36.311 11:28:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.311 11:28:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:36.311 11:28:36 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.311 11:28:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:36.311 11:28:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:36.311 11:28:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.311 11:28:36 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:36.311 11:28:36 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.311 11:28:36 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:36.312 11:28:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.312 11:28:36 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:36.312 11:28:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.312 11:28:36 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:36.312 11:28:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.312 11:28:36 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:36.312 11:28:36 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:36.312 11:28:36 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.571 2024/12/16 11:28:37 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:36.571 request: 00:06:36.571 { 00:06:36.571 "method": "env_dpdk_get_mem_stats", 00:06:36.571 "params": {} 00:06:36.571 } 00:06:36.571 Got JSON-RPC error response 00:06:36.571 GoRPCClient: error on JSON-RPC call 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.571 11:28:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63385 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 63385 ']' 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 63385 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63385 00:06:36.571 killing process with pid 63385 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63385' 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@973 -- # kill 63385 00:06:36.571 11:28:37 app_cmdline -- common/autotest_common.sh@978 -- # wait 63385 00:06:37.137 00:06:37.137 real 0m2.343s 00:06:37.137 user 0m2.923s 00:06:37.137 sys 0m0.570s 00:06:37.137 11:28:37 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.137 11:28:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.137 ************************************ 00:06:37.137 END TEST app_cmdline 00:06:37.137 ************************************ 00:06:37.137 11:28:37 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:37.137 11:28:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.137 11:28:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.137 11:28:37 -- common/autotest_common.sh@10 -- # set +x 00:06:37.137 ************************************ 00:06:37.137 START TEST version 00:06:37.137 ************************************ 00:06:37.137 11:28:37 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:37.137 * Looking for test storage... 
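This is the crux of the cmdline test: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 (Method not found). A rough hand-rolled client showing the same check over the default UNIX socket (the test itself goes through scripts/rpc.py; this sketch assumes a target is listening on /var/tmp/spdk.sock):

    import json
    import socket

    def spdk_rpc(method: str, path: str = "/var/tmp/spdk.sock") -> dict:
        """Send one JSON-RPC 2.0 request to spdk_tgt and read the single-object reply."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(path)
            sock.sendall(json.dumps({"jsonrpc": "2.0", "id": 1, "method": method}).encode())
            buf = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    raise RuntimeError("socket closed before a full reply arrived")
                buf += chunk
                try:
                    return json.loads(buf)   # keep reading until the JSON parses
                except json.JSONDecodeError:
                    continue

    print(spdk_rpc("spdk_get_version"))        # allowed: version object as logged above
    print(spdk_rpc("env_dpdk_get_mem_stats"))  # filtered: {"error": {"code": -32601, ...}}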
00:06:37.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:06:37.137 11:28:37 version -- app/version.sh@17 -- # get_header_version major
00:06:37.137 11:28:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:37.137 11:28:37 version -- app/version.sh@14 -- # cut -f2
00:06:37.137 11:28:37 version -- app/version.sh@14 -- # tr -d '"'
00:06:37.137 11:28:37 version -- app/version.sh@17 -- # major=25
00:06:37.137 11:28:37 version -- app/version.sh@18 -- # get_header_version minor
00:06:37.137 11:28:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:37.137 11:28:37 version -- app/version.sh@14 -- # cut -f2
00:06:37.138 11:28:37 version -- app/version.sh@14 -- # tr -d '"'
00:06:37.138 11:28:37 version -- app/version.sh@18 -- # minor=1
00:06:37.138 11:28:37 version -- app/version.sh@19 -- # get_header_version patch
00:06:37.138 11:28:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:37.138 11:28:37 version -- app/version.sh@14 -- # cut -f2
00:06:37.138 11:28:37 version -- app/version.sh@14 -- # tr -d '"'
00:06:37.138 11:28:37 version -- app/version.sh@19 -- # patch=0
00:06:37.138 11:28:37 version -- app/version.sh@20 -- # get_header_version suffix
00:06:37.138 11:28:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:37.138 11:28:37 version -- app/version.sh@14 -- # tr -d '"'
00:06:37.138 11:28:37 version -- app/version.sh@14 -- # cut -f2
00:06:37.138 11:28:37 version -- app/version.sh@20 -- # suffix=-pre
00:06:37.138 11:28:37 version -- app/version.sh@22 -- # version=25.1
00:06:37.138 11:28:37 version -- app/version.sh@25 -- # (( patch != 0 ))
00:06:37.138 11:28:37 version -- app/version.sh@28 -- # version=25.1rc0
00:06:37.138 11:28:37 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:06:37.138 11:28:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:06:37.396 11:28:37 version -- app/version.sh@30 -- # py_version=25.1rc0
00:06:37.396 11:28:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:06:37.396 00:06:37.396 real 0m0.258s
00:06:37.396 user 0m0.160s
00:06:37.396 sys 0m0.133s
00:06:37.396 11:28:37 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.396 11:28:37 version -- common/autotest_common.sh@10 -- # set +x
00:06:37.396 ************************************
00:06:37.396 END TEST version
00:06:37.396 ************************************
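The version.sh trace above recovers each field from include/spdk/version.h with grep/cut/tr, assembles 25.1, maps the -pre suffix to an rc0 tag, and requires the result to match the Python package's spdk.__version__. The same extraction in Python (a sketch; the regex loosely mirrors the grep pattern, and the -pre to rc0 mapping is taken from the logged assignments):

    import re

    def get_header_version(field: str, header: str = "include/spdk/version.h") -> str:
        """Mirror app/version.sh: grep the #define, take the value, strip quotes."""
        pattern = re.compile(rf'^#define SPDK_VERSION_{field}[ \t]+(\S+)', re.M)
        with open(header) as fh:
            return pattern.search(fh.read()).group(1).strip('"')

    major = get_header_version("MAJOR")    # 25
    minor = get_header_version("MINOR")    # 1
    patch = get_header_version("PATCH")    # 0
    suffix = get_header_version("SUFFIX")  # -pre
    version = f"{major}.{minor}" + (f".{patch}" if patch != "0" else "")
    if suffix == "-pre":
        version += "rc0"                   # 25.1rc0, compared against spdk.__version__
    print(version)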
00:06:37.396 11:28:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:06:37.396 11:28:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:06:37.396 11:28:37 -- spdk/autotest.sh@194 -- # uname -s
00:06:37.396 11:28:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:06:37.396 11:28:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:06:37.396 11:28:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:06:37.396 11:28:37 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:06:37.396 11:28:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:06:37.396 11:28:37 -- spdk/autotest.sh@260 -- # timing_exit lib
00:06:37.396 11:28:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:37.396 11:28:37 -- common/autotest_common.sh@10 -- # set +x
00:06:37.396 11:28:38 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:06:37.396 11:28:38 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:06:37.396 11:28:38 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']'
00:06:37.396 11:28:38 -- spdk/autotest.sh@277 -- # export NET_TYPE
00:06:37.396 11:28:38 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']'
00:06:37.396 11:28:38 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']'
00:06:37.396 11:28:38 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp
00:06:37.396 11:28:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:37.396 11:28:38 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:37.396 11:28:38 -- common/autotest_common.sh@10 -- # set +x
00:06:37.396 ************************************
00:06:37.396 START TEST nvmf_tcp
00:06:37.396 ************************************
00:06:37.396 11:28:38 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp
00:06:37.396 * Looking for test storage...
00:06:37.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf
00:06:37.655 11:28:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:06:37.655 11:28:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:06:37.655 11:28:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:06:37.655 11:28:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:37.655 11:28:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:37.655 11:28:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:37.655 ************************************
00:06:37.655 START TEST nvmf_target_core
00:06:37.655 ************************************
00:06:37.655 11:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:06:37.655 * Looking for test storage...
00:06:37.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf
00:06:37.655 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:06:37.655 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!'
Linux = Linux ']' 00:06:37.655 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:37.655 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.915 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.915 ************************************ 00:06:37.915 START TEST nvmf_abort 00:06:37.915 ************************************ 00:06:37.915 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:37.915 * Looking for test storage... 
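Sourcing nvmf/common.sh above generates the initiator identity with nvme gen-hostnqn: a uuid-form NQN whose uuid part doubles as NVME_HOSTID (2fb5c03d-... in this run). A stand-in generator for the same format (assumes the uuid form shown in the log; the real tool may also reuse a persistent host uuid):

    import uuid

    def gen_hostnqn() -> str:
        """NQN in the uuid form produced by 'nvme gen-hostnqn' above."""
        return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

    nqn = gen_hostnqn()
    hostid = nqn.rsplit(":", 1)[1]   # common.sh reuses the uuid as NVME_HOSTID
    print(nqn, hostid)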
00:06:37.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:37.916 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:06:37.916
11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:37.916 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:37.917 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:37.917 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:06:38.175 Cannot find device "nvmf_init_br" 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:38.175 Cannot find device "nvmf_init_br2" 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:38.175 Cannot find device "nvmf_tgt_br" 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:38.175 Cannot find device "nvmf_tgt_br2" 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:38.175 Cannot find device "nvmf_init_br" 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:38.175 Cannot find device "nvmf_init_br2" 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:38.175 Cannot find device "nvmf_tgt_br" 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:38.175 Cannot find device "nvmf_tgt_br2" 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:38.175 Cannot find device "nvmf_br" 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:38.175 Cannot find device "nvmf_init_if" 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:38.175 Cannot find device "nvmf_init_if2" 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:38.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:38.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:38.175 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:38.434 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:38.434 11:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:38.434 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:38.434 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:06:38.434 00:06:38.434 --- 10.0.0.3 ping statistics --- 00:06:38.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.434 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:38.434 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:38.434 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:06:38.434 00:06:38.434 --- 10.0.0.4 ping statistics --- 00:06:38.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.434 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:38.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:38.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:06:38.434 00:06:38.434 --- 10.0.0.1 ping statistics --- 00:06:38.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.434 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:38.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:38.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:06:38.434 00:06:38.434 --- 10.0.0.2 ping statistics --- 00:06:38.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.434 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:38.434 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=63823 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 63823 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 63823 ']' 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.435 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.435 [2024-12-16 11:28:39.184025] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:06:38.435 [2024-12-16 11:28:39.184123] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.693 [2024-12-16 11:28:39.340433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.693 [2024-12-16 11:28:39.424746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.693 [2024-12-16 11:28:39.424838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.693 [2024-12-16 11:28:39.424860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.693 [2024-12-16 11:28:39.424878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.693 [2024-12-16 11:28:39.424893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:38.693 [2024-12-16 11:28:39.426499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.693 [2024-12-16 11:28:39.426657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.693 [2024-12-16 11:28:39.426677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.952 [2024-12-16 11:28:39.619002] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.952 Malloc0 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.952 
Delay0 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.952 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.952 [2024-12-16 11:28:39.700387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:39.211 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.211 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:39.211 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.211 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.211 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.211 11:28:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:39.211 [2024-12-16 11:28:39.896904] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:41.742 Initializing NVMe Controllers 00:06:41.742 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:06:41.742 controller IO queue size 128 less than required 00:06:41.742 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:41.742 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:41.742 Initialization complete. Launching workers. 
00:06:41.742 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27611 00:06:41.742 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27672, failed to submit 62 00:06:41.742 success 27615, unsuccessful 57, failed 0 00:06:41.742 11:28:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:41.742 11:28:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.742 11:28:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.742 11:28:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.742 11:28:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:41.742 11:28:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:41.742 11:28:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:41.742 11:28:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:41.742 11:28:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:41.742 11:28:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:41.743 11:28:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:41.743 11:28:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:41.743 rmmod nvme_tcp 00:06:41.743 rmmod nvme_fabrics 00:06:41.743 rmmod nvme_keyring 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 63823 ']' 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 63823 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 63823 ']' 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 63823 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63823 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:41.743 killing process with pid 63823 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63823' 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 63823 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 63823 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:41.743 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:06:42.001 00:06:42.001 real 0m4.158s 00:06:42.001 user 0m10.725s 00:06:42.001 sys 0m1.112s 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:42.001 ************************************ 00:06:42.001 END TEST nvmf_abort 00:06:42.001 ************************************ 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:42.001 ************************************ 00:06:42.001 START TEST nvmf_ns_hotplug_stress 00:06:42.001 ************************************ 00:06:42.001 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:42.001 * Looking for test storage... 00:06:42.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:42.002 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:42.002 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:42.002 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:42.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.262 --rc genhtml_branch_coverage=1 00:06:42.262 --rc genhtml_function_coverage=1 00:06:42.262 --rc genhtml_legend=1 00:06:42.262 --rc geninfo_all_blocks=1 00:06:42.262 --rc geninfo_unexecuted_blocks=1 00:06:42.262 00:06:42.262 ' 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:42.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.262 --rc genhtml_branch_coverage=1 00:06:42.262 --rc genhtml_function_coverage=1 00:06:42.262 --rc genhtml_legend=1 00:06:42.262 --rc geninfo_all_blocks=1 00:06:42.262 --rc geninfo_unexecuted_blocks=1 00:06:42.262 00:06:42.262 ' 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:42.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.262 --rc genhtml_branch_coverage=1 00:06:42.262 --rc genhtml_function_coverage=1 00:06:42.262 --rc genhtml_legend=1 00:06:42.262 --rc geninfo_all_blocks=1 00:06:42.262 --rc geninfo_unexecuted_blocks=1 00:06:42.262 00:06:42.262 ' 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:42.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.262 --rc genhtml_branch_coverage=1 00:06:42.262 --rc genhtml_function_coverage=1 00:06:42.262 --rc genhtml_legend=1 00:06:42.262 --rc geninfo_all_blocks=1 00:06:42.262 --rc geninfo_unexecuted_blocks=1 00:06:42.262 00:06:42.262 ' 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.262 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:42.263 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:42.263 11:28:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:42.263 Cannot find device "nvmf_init_br" 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:42.263 Cannot find device "nvmf_init_br2" 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:42.263 Cannot find device "nvmf_tgt_br" 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:42.263 Cannot find device "nvmf_tgt_br2" 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:42.263 Cannot find device "nvmf_init_br" 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:42.263 Cannot find device "nvmf_init_br2" 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:42.263 Cannot find device "nvmf_tgt_br" 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:42.263 Cannot find device "nvmf_tgt_br2" 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:42.263 Cannot find device "nvmf_br" 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:42.263 Cannot find device "nvmf_init_if" 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:06:42.263 11:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:42.263 Cannot find device "nvmf_init_if2" 00:06:42.263 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:06:42.263 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:42.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:42.263 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:06:42.263 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:42.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:42.523 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:42.523 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:06:42.523 00:06:42.523 --- 10.0.0.3 ping statistics --- 00:06:42.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.523 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:42.523 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:06:42.523 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:06:42.523 00:06:42.523 --- 10.0.0.4 ping statistics --- 00:06:42.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.523 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:42.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:06:42.523 00:06:42.523 --- 10.0.0.1 ping statistics --- 00:06:42.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.523 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:42.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:06:42.523 00:06:42.523 --- 10.0.0.2 ping statistics --- 00:06:42.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.523 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=64108 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 64108 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 64108 ']' 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:42.523 11:28:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.523 11:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.799 [2024-12-16 11:28:43.341657] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:06:42.799 [2024-12-16 11:28:43.341794] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.799 [2024-12-16 11:28:43.498757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.091 [2024-12-16 11:28:43.570992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.091 [2024-12-16 11:28:43.571078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.091 [2024-12-16 11:28:43.571104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.091 [2024-12-16 11:28:43.571114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.091 [2024-12-16 11:28:43.571124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:43.091 [2024-12-16 11:28:43.572514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:06:43.091 [2024-12-16 11:28:43.572728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:06:43.091 [2024-12-16 11:28:43.572732] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:43.657 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:43.657 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:06:43.657 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:43.657 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:43.657 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:43.657 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:43.657 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:06:43.657 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:06:44.225 [2024-12-16 11:28:44.692071] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:44.225 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:44.487 11:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:06:44.746 [2024-12-16 11:28:45.245188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:06:44.746 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:06:45.005 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:06:45.264 Malloc0
00:06:45.264 11:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:45.524 Delay0
00:06:45.524 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:45.782 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:06:46.350 NULL1
00:06:46.350 11:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:06:46.608 11:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- #
PERF_PID=64239 00:06:46.608 11:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:46.608 11:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:46.608 11:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.984 Read completed with error (sct=0, sc=11) 00:06:47.984 11:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.984 11:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:47.984 11:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:48.242 true 00:06:48.242 11:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:48.242 11:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.179 11:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.437 11:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:49.437 11:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:49.695 true 00:06:49.695 11:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:49.695 11:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.953 11:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.212 11:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:50.212 11:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:50.470 true 00:06:50.470 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:50.470 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.729 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.296 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:51.296 11:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:51.296 true 00:06:51.296 11:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:51.296 11:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.863 11:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.863 11:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:51.863 11:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:52.122 true 00:06:52.380 11:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:52.380 11:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.317 11:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.317 11:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:53.317 11:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:53.575 true 00:06:53.575 11:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:53.575 11:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.143 11:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.402 11:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:54.402 11:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:54.660 
true 00:06:54.660 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:54.660 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.919 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.178 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:55.178 11:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:55.436 true 00:06:55.436 11:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:55.436 11:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.695 11:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.953 11:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:55.953 11:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:56.212 true 00:06:56.212 11:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:56.212 11:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.148 11:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.407 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:57.407 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:57.665 true 00:06:57.665 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:57.665 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.924 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.183 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:58.183 11:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:58.441 true 00:06:58.441 11:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:58.441 11:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.007 11:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.007 11:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:59.007 11:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:59.264 true 00:06:59.520 11:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:06:59.520 11:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.778 11:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.039 11:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:00.039 11:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:00.313 true 00:07:00.313 11:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:00.313 11:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.245 11:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.503 11:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:01.503 11:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:01.761 true 00:07:01.761 11:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:01.761 11:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.019 11:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.277 11:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:02.277 11:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:02.534 true 00:07:02.534 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:02.534 11:29:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.791 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.049 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:03.049 11:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:03.615 true 00:07:03.615 11:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:03.615 11:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.615 11:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.182 11:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:04.182 11:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:04.182 true 00:07:04.182 11:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:04.182 11:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.121 11:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.381 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:05.381 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:05.639 true 00:07:05.639 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:05.639 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.897 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.156 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:06.156 11:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:06.415 true 00:07:06.415 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:06.415 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.983 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.242 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:07.242 11:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:07.501 true 00:07:07.501 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:07.501 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.760 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.019 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:08.019 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:08.277 true 00:07:08.277 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:08.277 11:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.212 11:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.471 11:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:09.471 11:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:09.729 true 00:07:09.729 11:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:09.729 11:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.987 11:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.249 11:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:10.249 11:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:10.507 true 00:07:10.507 11:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:10.507 11:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:10.766 11:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.025 11:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:11.025 11:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:11.284 true 00:07:11.284 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:11.284 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.220 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.479 11:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:12.479 11:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:12.737 true 00:07:12.737 11:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:12.737 11:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.995 11:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.561 11:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:13.561 11:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:13.561 true 00:07:13.561 11:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:13.561 11:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.820 11:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.387 11:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:14.387 11:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:14.646 true 00:07:14.646 11:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:14.646 11:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.904 11:29:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.176 11:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:15.176 11:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:15.449 true 00:07:15.449 11:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:15.449 11:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.707 11:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.965 11:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:15.965 11:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:16.223 true 00:07:16.223 11:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239 00:07:16.223 11:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.158 Initializing NVMe Controllers 00:07:17.158 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:07:17.158 Controller IO queue size 128, less than required. 00:07:17.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:17.158 Controller IO queue size 128, less than required. 00:07:17.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:17.158 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:17.158 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:17.158 Initialization complete. Launching workers. 
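Everything from the PERF_PID assignment at step 42 down to here is one small loop: step 40 launches spdk_nvme_perf in the background against the 10.0.0.3 listener, and for as long as that process stays alive (the kill -0 probe at step 44) the script removes namespace 1, re-adds Delay0, and grows NULL1 by one block per pass (steps 45-50). Reconstructed from those step numbers, under the same paths as the sketch above:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &    # step 40: 30 s, QD128, 512 B reads
  PERF_PID=$!

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2> /dev/null; do                         # step 44
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # step 45
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # step 46
      null_size=$((null_size + 1))                                   # step 49
      $rpc bdev_null_resize NULL1 "$null_size"                       # step 50
  done
  wait "$PERF_PID"                                                   # step 53

The "Read completed with error (sct=0, sc=11)" noise earlier in the run is the expected fallout of reads racing namespace removal; perf keeps running and prints only a sample of them, hence "Message suppressed 999 times". Once the 30 seconds elapse, pid 64239 disappears and the kill probe below reports "No such process", ending the loop. In the summary that follows, NSID 1 (Delay0) predictably averages far higher latency than NSID 2 (NULL1), since every I/O to it passes through the delay bdev.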
00:07:17.158 ========================================================
00:07:17.158 Latency(us)
00:07:17.158 Device Information : IOPS MiB/s Average min max
00:07:17.158 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 330.30 0.16 127506.45 3819.29 1019566.36
00:07:17.158 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6411.07 3.13 19964.89 3440.61 638283.13
00:07:17.158 ========================================================
00:07:17.158 Total : 6741.37 3.29 25234.00 3440.61 1019566.36
00:07:17.158
00:07:17.158 11:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:17.416 11:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
11:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:07:17.675 true
00:07:17.675 11:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64239
00:07:17.675 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (64239) - No such process
00:07:17.675 11:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 64239
11:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:18.243 11:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:18.501 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:18.501 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:18.501 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:18.501 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:18.501 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:18.760 null0
00:07:18.760 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:18.760 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:18.760 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:19.018 null1
00:07:19.018 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:19.018 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:19.018 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:19.277 null2
00:07:19.277 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.277 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.277 11:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:19.534 null3 00:07:19.534 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.534 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.534 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:19.792 null4 00:07:19.792 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.792 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.792 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:20.050 null5 00:07:20.050 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.050 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.050 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:20.338 null6 00:07:20.338 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.339 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.339 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:20.649 null7 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
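With the timed phase done and both namespaces detached (steps 54-55), the test moves to its parallel phase: steps 58-64 create eight 100 MiB null bdevs (null0 through null7, 4 KiB blocks) and fork one add_remove worker per bdev, each bound to its own namespace ID; the pid collected at step 64 for each worker is what the "wait 65319 65321 ..." record further down reaps at step 66. A sketch of that spawn pattern, with add_remove as reconstructed a few records below:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nthreads=8                                      # step 58
  pids=()
  for ((i = 0; i < nthreads; i++)); do            # step 59
      $rpc bdev_null_create "null$i" 100 4096     # step 60: 100 MiB, 4 KiB blocks
  done
  for ((i = 0; i < nthreads; i++)); do            # step 62
      add_remove "$((i + 1))" "null$i" &          # step 63: nsid 1..8 <- null0..null7
      pids+=($!)                                  # step 64
  done
  wait "${pids[@]}"                               # step 66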
00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.649 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 65319 65321 65322 65324 65325 65327 65329 65330 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.650 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.909 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.909 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.909 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.167 11:29:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.167 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.167 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.167 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.167 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.427 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.427 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.427 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.427 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.427 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.427 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.427 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.427 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.427 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
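Each worker runs the add_remove function whose xtrace (steps 14-18) interleaves through the surrounding records: ten passes of attaching its null bdev at a fixed namespace ID and immediately detaching it. Reconstructed from those step numbers:

  add_remove() {
      local nsid=$1 bdev=$2                                                         # step 14
      for ((i = 0; i < 10; i++)); do                                                # step 16
          $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # step 17
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # step 18
      done
  }

All eight subshells write to the same xtrace descriptor, which is why add and remove records for different namespace IDs weave freely through one another here.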
00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.427 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.685 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.685 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.685 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.685 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.685 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.685 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.944 11:29:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.944 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.203 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.203 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.203 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.203 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.203 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.203 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.204 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.204 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.204 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.204 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.204 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.204 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:22.461 11:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.461 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.461 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.461 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.461 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.461 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.461 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.461 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.461 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.461 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.461 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.719 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.977 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.977 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.977 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.977 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.977 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.977 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.977 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.977 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
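All of the interleaved @16-@18 markers in this stretch come from one small loop in ns_hotplug_stress.sh: line 16 is the counter check, line 17 attaches a namespace with nvmf_subsystem_add_ns, and line 18 detaches one with nvmf_subsystem_remove_ns; the overlapping bursts of adds and removes indicate several copies of the loop running concurrently. A minimal bash sketch of that shape, reconstructed from the trace alone (the worker function, its name, the shuffled ordering, and the two-worker layout are assumptions, not the verbatim SPDK script; the rpc.py argument order is copied from the entries above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1

    hotplug_worker() {
        local i=0 n
        while (( i < 10 )); do                   # sh@16 in the trace
            for n in $(shuf -e {1..8}); do
                # sh@17: attach NSID $n, backed by null bdev null$((n-1))
                "$rpc" nvmf_subsystem_add_ns -n "$n" "$subnqn" "null$((n - 1))"
            done
            for n in $(shuf -e {1..8}); do
                # sh@18: detach the namespace again by NSID
                "$rpc" nvmf_subsystem_remove_ns "$subnqn" "$n"
            done
            (( ++i ))
        done
    }

    hotplug_worker & hotplug_worker &            # overlap add/remove traffic
    wait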
00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.236 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.494 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.494 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.494 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.494 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.494 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.494 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.494 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.494 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.494 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:07:23.494 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.754 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.013 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.013 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.013 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.013 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.013 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.013 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.013 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.013 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.013 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.013 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.013 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.013 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.270 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.270 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.270 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.270 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.270 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.270 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.271 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.271 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.271 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.271 11:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.271 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.271 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.271 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.528 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.841 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.841 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.841 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.841 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.841 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:07:24.841 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.841 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.841 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.841 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.841 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.841 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.841 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.100 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.358 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.358 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.358 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.358 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.358 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.358 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.358 11:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.359 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.617 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.875 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:07:25.876 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.134 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.134 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.134 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.134 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.134 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.134 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.134 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.134 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.134 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.134 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.393 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.393 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.393 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.393 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.393 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.393 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.393 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.393 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.393 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.393 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:26.393 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.393 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.393 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.393 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.393 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.393 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.393 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.393 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.393 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.393 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.652 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.652 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.652 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.652 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.652 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.652 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.652 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.652 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.911 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.911 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.911 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.911 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:26.911 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:26.911 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:26.911 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:26.911 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:26.911 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:26.911 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:27.170 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 64108 ']'
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 64108
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 64108 ']'
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 64108
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64108
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:27.170 killing process with pid 64108
11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64108'
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 64108
00:07:27.170 11:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 64108
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:07:27.429 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
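Spelled out, the nvmf_veth_fini sequence traced above is plain iproute2 teardown of the virtual topology that NET_TYPE=virt builds. The commands and interface names below are copied from the entries above; only the grouping into loops is editorial:

    # Detach the veth bridge ports from the bridge, then bring them down
    # (nvmf/common.sh@233-240 above).
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
    done
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge    # @241: drop the bridge itself
    ip link delete nvmf_init_if           # @242-243: initiator-side interfaces
    ip link delete nvmf_init_if2
    # @244-245: target-side interfaces live inside the test's network namespace
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2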
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
00:07:27.688 ************************************
00:07:27.688 END TEST nvmf_ns_hotplug_stress
00:07:27.688 ************************************
00:07:27.688 
00:07:27.688 real 0m45.598s
00:07:27.688 user 3m43.931s
00:07:27.688 sys 0m13.006s
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:27.688 ************************************
00:07:27.688 START TEST nvmf_delete_subsystem
************************************
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
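The START/END banners and the real/user/sys lines that bracket each test come from the run_test wrapper in autotest_common.sh. A rough sketch of its shape, inferred only from the banner and timing output visible in this log (not the verbatim helper; the argument check at @1105 above suggests a special single-argument path that is omitted here):

    run_test() {            # usage: run_test <name> <command> [args...]
        local name=$1; shift
        time {
            echo "************************************"
            echo "START TEST $name"
            echo "************************************"
            "$@"
            echo "************************************"
            echo "END TEST $name"
            echo "************************************"
        }
        # time's report (the real/user/sys lines) prints after the END
        # banner, matching the ordering seen in the log above
    }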
00:07:27.688 * Looking for test storage...
00:07:27.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:07:27.688 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:27.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:27.948 --rc genhtml_branch_coverage=1
00:07:27.948 --rc genhtml_function_coverage=1
00:07:27.948 --rc genhtml_legend=1
00:07:27.948 --rc geninfo_all_blocks=1
00:07:27.948 --rc geninfo_unexecuted_blocks=1
00:07:27.948 
00:07:27.948 '
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:27.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:27.948 --rc genhtml_branch_coverage=1
00:07:27.948 --rc genhtml_function_coverage=1
00:07:27.948 --rc genhtml_legend=1
00:07:27.948 --rc geninfo_all_blocks=1
00:07:27.948 --rc geninfo_unexecuted_blocks=1
00:07:27.948 
00:07:27.948 '
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:07:27.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:27.948 --rc genhtml_branch_coverage=1
00:07:27.948 --rc genhtml_function_coverage=1
00:07:27.948 --rc genhtml_legend=1
00:07:27.948 --rc geninfo_all_blocks=1
00:07:27.948 --rc geninfo_unexecuted_blocks=1
00:07:27.948 
00:07:27.948 '
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:07:27.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:27.948 --rc genhtml_branch_coverage=1
00:07:27.948 --rc genhtml_function_coverage=1
00:07:27.948 --rc genhtml_legend=1
00:07:27.948 --rc geninfo_all_blocks=1
00:07:27.948 --rc geninfo_unexecuted_blocks=1
00:07:27.948 
00:07:27.948 '
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
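The lt/cmp_versions trace above decides whether the installed lcov (1.15 here) predates 2, so the harness knows which coverage flags the tool accepts: both versions are split on '.', '-', and ':' and the fields are compared numerically, left to right. A condensed re-sketch of that comparison (simplified from the scripts/common.sh markers above, not the verbatim helper; the decimal() digit validation is folded into the :-0 defaults):

    version_lt() {                    # returns 0 when $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # missing fields compare as 0, e.g. "2" is treated as "2.0"
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                      # equal is not less-than
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2   # 1.15 < 2 -> true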
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:27.948 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:27.949 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:27.949 Cannot find device "nvmf_init_br" 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:27.949 Cannot find device "nvmf_init_br2" 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:27.949 Cannot find device "nvmf_tgt_br" 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:27.949 Cannot find device "nvmf_tgt_br2" 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:27.949 Cannot find device "nvmf_init_br" 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:27.949 Cannot find device "nvmf_init_br2" 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:27.949 Cannot find device "nvmf_tgt_br" 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:27.949 Cannot find device "nvmf_tgt_br2" 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:27.949 Cannot find device "nvmf_br" 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:27.949 Cannot find device "nvmf_init_if" 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:27.949 Cannot find device "nvmf_init_if2" 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:27.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
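The topology about to be (re)built is: nvmf_init_if (10.0.0.1/24) and nvmf_init_if2 (10.0.0.2/24) as initiator-side veth ends in the root namespace, nvmf_tgt_if (10.0.0.3/24) and nvmf_tgt_if2 (10.0.0.4/24) moved into the nvmf_tgt_ns_spdk namespace, and all four peer ends enslaved to the nvmf_br bridge. The "Cannot find device" / "Cannot open network namespace" failures above are the expected first-run case: nvmf_veth_init starts by tearing down any leftovers from a previous run, and each failing probe is forced to succeed (the "-- # true" entries) so cleanup never aborts the test. A minimal sketch of that idempotent teardown, using the names from these traces:

    # Sketch only, mirroring the nvmf/common.sh teardown traced above.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true  # detach from the bridge if enslaved
        ip link set "$dev" down 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true
    ip link delete nvmf_init_if2 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true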
00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:27.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:27.949 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:28.208 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:28.208 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:07:28.208 00:07:28.208 --- 10.0.0.3 ping statistics --- 00:07:28.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.208 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:07:28.208 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:28.208 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:28.208 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:07:28.208 00:07:28.208 --- 10.0.0.4 ping statistics --- 00:07:28.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.208 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:28.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:07:28.209 00:07:28.209 --- 10.0.0.1 ping statistics --- 00:07:28.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.209 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:28.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:28.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:07:28.209 00:07:28.209 --- 10.0.0.2 ping statistics --- 00:07:28.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.209 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=66724 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 66724 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 66724 ']' 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.209 11:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.467 [2024-12-16 11:29:29.008081] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
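With the veth pairs, addresses, bridge, and ACCEPT rules in place, the four pings confirm connectivity in both directions (root namespace to 10.0.0.3/10.0.0.4, and from inside nvmf_tgt_ns_spdk back to 10.0.0.1/10.0.0.2), and nvmf_tgt is launched inside the namespace via the NVMF_TARGET_NS_CMD prefix. One detail worth pulling out: the firewall rules go in through the ipts wrapper, which tags each rule so the teardown at the end of the run can remove exactly what the test added. A sketch of the pairing, inferred from the common.sh@790/@791 traces here and in the cleanup below:

    # Sketch only: tag every inserted rule with an SPDK_NVMF comment...
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # ...so cleanup can filter the tagged rules out and restore the rest untouched.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }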
00:07:28.467 [2024-12-16 11:29:29.008267] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.467 [2024-12-16 11:29:29.163654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.726 [2024-12-16 11:29:29.251063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.726 [2024-12-16 11:29:29.251129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.726 [2024-12-16 11:29:29.251196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.726 [2024-12-16 11:29:29.251212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.726 [2024-12-16 11:29:29.251223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.726 [2024-12-16 11:29:29.252563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.726 [2024-12-16 11:29:29.252582] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.661 [2024-12-16 11:29:30.098486] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.661 [2024-12-16 11:29:30.114665] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.661 NULL1 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.661 Delay0 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=66775 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:29.661 11:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:29.661 [2024-12-16 11:29:30.329440] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
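First-pass setup is now complete: subsystem nqn.2016-06.io.spdk:cnode1 listens on 10.0.0.3:4420, backed by a delay bdev, and spdk_nvme_perf runs against it for 5 seconds at queue depth 128. The rpc_cmd calls above forward to scripts/rpc.py; condensed into one place, the provisioning sequence is (arguments taken verbatim from the traces):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_null_create NULL1 1000 512           # 1000 MiB null bdev, 512-byte blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s of injected latency per I/O (values in us)
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The injected 1,000,000 us latency is the point of the Delay0 bdev: at roughly one second per I/O and queue depth 128, plenty of commands are still in flight when nvmf_delete_subsystem is issued two seconds in, which is exactly the race this test wants to provoke.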
00:07:31.583 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.583 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.583 11:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.845 Read completed with error (sct=0, sc=8) 00:07:31.845 starting I/O failed: -6 00:07:31.845 Write completed with error (sct=0, sc=8) 00:07:31.845 Read completed with error (sct=0, sc=8)
[long runs of identical completion records condensed: several hundred "Read/Write completed with error (sct=0, sc=8)" lines interleaved with "starting I/O failed: -6" entries follow. Deleting the subsystem under load aborts every in-flight command (sct=0/sc=8 is the NVMe generic status "Command Aborted due to SQ Deletion"), and new submissions fail with -6 (-ENXIO) as the qpairs disappear. The qpair state errors logged during the flood:]
[2024-12-16 11:29:32.365940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74cc30 is same with the state(6) to be set
[2024-12-16 11:29:32.367921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff20c00d510 is same with the state(6) to be set
[2024-12-16 11:29:33.343725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x741aa0 is same with the state(6) to be set
[2024-12-16 11:29:33.366800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff20c00d840 is same with the state(6) to be set
[2024-12-16 11:29:33.367036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff20c00d060 is same with the state(6) to be set
[2024-12-16 11:29:33.367888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74ca50 is same with the state(6) to be set
[2024-12-16 11:29:33.369000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74fea0 is same with the state(6) to be set
00:07:32.781 Initializing NVMe Controllers 00:07:32.781 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:07:32.781 Controller IO queue size 128, less than required. 00:07:32.781 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:32.781 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:32.781 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:32.781 Initialization complete. Launching workers.
00:07:32.781 ======================================================== 00:07:32.781 Latency(us) 00:07:32.781 Device Information : IOPS MiB/s Average min max 00:07:32.781 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.85 0.08 903902.36 641.67 1011492.42 00:07:32.781 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.45 0.07 1012881.94 308.95 2001591.74 00:07:32.781 ======================================================== 00:07:32.781 Total : 317.30 0.15 955919.22 308.95 2001591.74 00:07:32.781 00:07:32.781 [2024-12-16 11:29:33.369980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x741aa0 (9): Bad file descriptor 00:07:32.781 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:32.781 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.781 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:32.781 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66775 00:07:32.781 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66775 00:07:33.348 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (66775) - No such process 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 66775 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 66775 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 66775 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
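The delay=0 / kill -0 66775 / sleep 0.5 traces here, together with the (( delay++ > 30 )) check just below, are the script waiting for perf to exit on its own now that its controller is gone; the "kill: (66775) - No such process" message marks the loop condition finally failing. Reconstructed from the visible commands, the wait loop is roughly (a sketch; the second pass below uses the same shape with a 20-iteration budget):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do  # perf still running?
        sleep 0.5
        (( delay++ > 30 )) && exit 1           # safety valve: give up after ~15 s
    done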
common/autotest_common.sh@10 -- # set +x 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.348 [2024-12-16 11:29:33.896773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=66828 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66828 00:07:33.348 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:33.348 [2024-12-16 11:29:34.095074] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
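One pattern from the traces above deserves a note: NOT wait 66775 asserts the negative case, that is, waiting on the already-reaped perf pid must fail, and the harness inverts that failure into a pass (the es=1 bookkeeping in the trace). As the helper appears to behave here, it amounts to:

    # Sketch only: run a command and succeed exactly when it fails.
    NOT() {
        if "$@"; then
            return 1   # command unexpectedly succeeded
        fi
        return 0
    }
    NOT wait 66775     # passes: the perf process is already gone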
00:07:33.915 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:33.915 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66828 00:07:33.915 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:34.173 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.173 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66828 00:07:34.173 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:34.739 11:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.739 11:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66828 00:07:34.739 11:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:35.305 11:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:35.305 11:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66828 00:07:35.305 11:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:35.872 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:35.872 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66828 00:07:35.872 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:36.437 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:36.437 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66828 00:07:36.437 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:36.437 Initializing NVMe Controllers 00:07:36.437 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:07:36.437 Controller IO queue size 128, less than required. 00:07:36.437 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:36.437 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:36.437 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:36.437 Initialization complete. Launching workers. 
00:07:36.437 ======================================================== 00:07:36.437 Latency(us) 00:07:36.438 Device Information : IOPS MiB/s Average min max 00:07:36.438 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003481.51 1000156.66 1041227.46 00:07:36.438 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004876.59 1000133.94 1041384.86 00:07:36.438 ======================================================== 00:07:36.438 Total : 256.00 0.12 1004179.05 1000133.94 1041384.86 00:07:36.438 00:07:36.696 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:36.696 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66828 00:07:36.696 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (66828) - No such process 00:07:36.696 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 66828 00:07:36.696 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:36.696 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:36.696 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:36.696 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:36.954 rmmod nvme_tcp 00:07:36.954 rmmod nvme_fabrics 00:07:36.954 rmmod nvme_keyring 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 66724 ']' 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 66724 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 66724 ']' 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 66724 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66724 00:07:36.954 killing process with pid 66724 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66724' 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 66724 00:07:36.954 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 66724 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:37.213 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:37.471 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:37.472 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:07:37.472 00:07:37.472 real 0m9.769s 00:07:37.472 user 0m29.093s 00:07:37.472 sys 0m1.759s 00:07:37.472 ************************************ 00:07:37.472 END TEST nvmf_delete_subsystem 00:07:37.472 ************************************ 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:37.472 ************************************ 00:07:37.472 START TEST nvmf_host_management 00:07:37.472 ************************************ 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:37.472 * Looking for test storage... 00:07:37.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:37.472 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:37.731 
11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:37.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.731 --rc genhtml_branch_coverage=1 00:07:37.731 --rc genhtml_function_coverage=1 00:07:37.731 --rc genhtml_legend=1 00:07:37.731 --rc geninfo_all_blocks=1 00:07:37.731 --rc geninfo_unexecuted_blocks=1 00:07:37.731 00:07:37.731 ' 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:37.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.731 --rc genhtml_branch_coverage=1 00:07:37.731 --rc genhtml_function_coverage=1 00:07:37.731 --rc genhtml_legend=1 00:07:37.731 --rc geninfo_all_blocks=1 00:07:37.731 --rc geninfo_unexecuted_blocks=1 00:07:37.731 00:07:37.731 ' 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:37.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.731 --rc genhtml_branch_coverage=1 00:07:37.731 --rc genhtml_function_coverage=1 00:07:37.731 --rc genhtml_legend=1 00:07:37.731 --rc geninfo_all_blocks=1 00:07:37.731 --rc geninfo_unexecuted_blocks=1 00:07:37.731 00:07:37.731 ' 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:37.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.731 --rc genhtml_branch_coverage=1 00:07:37.731 --rc 
genhtml_function_coverage=1 00:07:37.731 --rc genhtml_legend=1 00:07:37.731 --rc geninfo_all_blocks=1 00:07:37.731 --rc geninfo_unexecuted_blocks=1 00:07:37.731 00:07:37.731 ' 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.731 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:37.732 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:37.732 11:29:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:37.732 Cannot find device "nvmf_init_br" 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:37.732 Cannot find device "nvmf_init_br2" 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:37.732 Cannot find device "nvmf_tgt_br" 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:37.732 Cannot find device "nvmf_tgt_br2" 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:37.732 Cannot find device "nvmf_init_br" 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:37.732 Cannot find device "nvmf_init_br2" 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:37.732 Cannot find device "nvmf_tgt_br" 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:37.732 Cannot find device "nvmf_tgt_br2" 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:37.732 Cannot find device "nvmf_br" 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:37.732 11:29:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:37.732 Cannot find device "nvmf_init_if" 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:37.732 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:37.992 Cannot find device "nvmf_init_if2" 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:37.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:37.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:37.992 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:37.992 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:07:37.992 00:07:37.992 --- 10.0.0.3 ping statistics --- 00:07:37.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.992 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:37.992 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:07:37.992 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:07:37.992 00:07:37.992 --- 10.0.0.4 ping statistics --- 00:07:37.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.992 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:37.992 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:37.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:07:37.992 00:07:37.992 --- 10.0.0.1 ping statistics --- 00:07:37.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.992 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:38.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:07:38.251 00:07:38.251 --- 10.0.0.2 ping statistics --- 00:07:38.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.251 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=67117 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 67117 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
'[' -z 67117 ']' 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.251 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.251 [2024-12-16 11:29:38.840846] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:07:38.251 [2024-12-16 11:29:38.840954] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.251 [2024-12-16 11:29:38.995873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.509 [2024-12-16 11:29:39.066428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.509 [2024-12-16 11:29:39.066725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.509 [2024-12-16 11:29:39.066880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.509 [2024-12-16 11:29:39.067026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.509 [2024-12-16 11:29:39.067068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
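
The startup above is nvmfappstart: the target is launched inside the nvmf_tgt_ns_spdk namespace with ip netns exec, its pid is recorded as nvmfpid=67117, and waitforlisten blocks until that process answers on /var/tmp/spdk.sock. A simplified sketch of the start-and-wait sequence; probing readiness with rpc.py rpc_get_methods and the 0.1 s retry pacing are assumptions here, and the real waitforlisten in common/autotest_common.sh is more thorough:

# Start nvmf_tgt in the test namespace, then wait for its RPC socket to answer.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # ready once the app responds over the UNIX domain socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" || exit 1  # the target died during startup
    sleep 0.1
done
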
00:07:38.509 [2024-12-16 11:29:39.068451] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.509 [2024-12-16 11:29:39.068496] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.509 [2024-12-16 11:29:39.068601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:38.509 [2024-12-16 11:29:39.068610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.509 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.509 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:38.509 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:38.509 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.509 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.509 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.509 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:38.509 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.509 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.509 [2024-12-16 11:29:39.258983] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.768 Malloc0 00:07:38.768 [2024-12-16 11:29:39.333406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.768 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=67170 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 67170 /var/tmp/bdevperf.sock 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 67170 ']' 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:38.768 {
00:07:38.768 "params": {
00:07:38.768 "name": "Nvme$subsystem",
00:07:38.768 "trtype": "$TEST_TRANSPORT",
00:07:38.768 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:38.768 "adrfam": "ipv4",
00:07:38.768 "trsvcid": "$NVMF_PORT",
00:07:38.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:38.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:38.768 "hdgst": ${hdgst:-false},
00:07:38.768 "ddgst": ${ddgst:-false}
00:07:38.768 },
00:07:38.768 "method": "bdev_nvme_attach_controller"
00:07:38.768 }
00:07:38.768 EOF
00:07:38.768 )")
00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:38.768 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:38.768 "params": {
00:07:38.768 "name": "Nvme0",
00:07:38.768 "trtype": "tcp",
00:07:38.768 "traddr": "10.0.0.3",
00:07:38.768 "adrfam": "ipv4",
00:07:38.768 "trsvcid": "4420",
00:07:38.768 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:38.768 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:07:38.768 "hdgst": false,
00:07:38.768 "ddgst": false
00:07:38.768 },
00:07:38.768 "method": "bdev_nvme_attach_controller"
00:07:38.768 }'
00:07:38.768 [2024-12-16 11:29:39.450890] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
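
The gen_nvmf_target_json 0 expansion above stamps the subsystem index into a bdev_nvme_attach_controller stanza and hands the result to bdevperf over process substitution (--json /dev/fd/63), so no config file is written to disk. A stripped-down sketch of the same heredoc templating; gen_json is an illustrative stand-in, and the real helper in nvmf/common.sh takes the transport and addresses from the test environment and joins multiple stanzas with jq:

# Emit one bdev_nvme_attach_controller stanza for subsystem index $1.
# gen_json is an illustrative stand-in for nvmf/common.sh's gen_nvmf_target_json.
gen_json() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# bdevperf reads the config straight off a file descriptor, no temp file needed:
# bdevperf -r /var/tmp/bdevperf.sock --json <(gen_json 0) -q 64 -o 65536 -w verify -t 10

Process substitution keeps the generated JSON on an anonymous pipe, which is why the trace shows --json /dev/fd/63 rather than a path.
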
00:07:38.768 [2024-12-16 11:29:39.451383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67170 ] 00:07:39.026 [2024-12-16 11:29:39.661786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.026 [2024-12-16 11:29:39.736684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.285 Running I/O for 10 seconds... 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.852 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 
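
The waitforio call above is what gates the test: it samples bdev_get_iostat over the bdevperf RPC socket until the bdev's num_read_ops shows real traffic (963 on the first sample here, well past the 100-read threshold, hence the immediate break). A sketch of that check, with rpc.py standing in for the suite's rpc_cmd wrapper and the one-second pacing assumed, since the trace does not show the real interval:

# Succeed once the named bdev reports at least 100 completed reads.
waitforio() {
    local sock=$1 bdev=$2 ret=1 i
    for ((i = 10; i != 0; i--)); do
        local count
        count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [[ $count -ge 100 ]]; then
            ret=0
            break
        fi
        sleep 1  # pacing assumed; the real interval is not visible in this trace
    done
    return $ret
}
# e.g. waitforio /var/tmp/bdevperf.sock Nvme0n1
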
00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.150 [2024-12-16 11:29:40.656344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:40.150 [2024-12-16 11:29:40.656401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.150 [2024-12-16 11:29:40.656417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:40.150 [2024-12-16 11:29:40.656427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.150 [2024-12-16 11:29:40.656438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:40.150 [2024-12-16 11:29:40.656447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.150 [2024-12-16 11:29:40.656457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:40.150 [2024-12-16 11:29:40.656466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.150 [2024-12-16 11:29:40.656476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd4130 is same with the state(6) to be set 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.150 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:40.150 [2024-12-16 11:29:40.667623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd4130 (9): Bad file descriptor 00:07:40.150 [2024-12-16 11:29:40.667720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.150 [2024-12-16 11:29:40.667737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:40.150 [2024-12-16 11:29:40.667759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:07:40.150 [2024-12-16 11:29:40.667769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:40.150 [2024-12-16 11:29:40.667781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:40.150 [2024-12-16 11:29:40.667790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 60 more identical WRITE command / ABORTED - SQ DELETION completion pairs elided: cid:3 through cid:62, lba advancing by 128 blocks per command from 8576 to 16128 ...]
00:07:40.152 [2024-12-16 11:29:40.669103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:40.152 [2024-12-16 11:29:40.669112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:40.152 [2024-12-16 11:29:40.670331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:40.152 task offset: 8192 on job bdev=Nvme0n1 fails
00:07:40.152
00:07:40.152 Latency(us)
00:07:40.152 [2024-12-16T11:29:40.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:40.152 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:40.152 Job: Nvme0n1 ended in about 0.74 seconds with error
00:07:40.152 Verification LBA range: start 0x0 length 0x400
00:07:40.152 Nvme0n1 : 0.74 1470.41 91.90 86.49 0.00 40140.71 1906.50 37653.41
00:07:40.152 [2024-12-16T11:29:40.905Z] ===================================================================================================================
00:07:40.152 [2024-12-16T11:29:40.905Z] Total : 1470.41 91.90 86.49 0.00 40140.71 1906.50 37653.41
00:07:40.152 [2024-12-16 11:29:40.672673] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:40.152 [2024-12-16 11:29:40.676074] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
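The flood of notices above is one completion per outstanding command: bdevperf runs this job at queue depth 64 (depth: 64 in the job line), so when the failing controller's submission queue 1 is deleted during the reset, every in-flight WRITE is completed with ABORTED - SQ DELETION. The sizes are also internally consistent, assuming the backing bdev uses 512-byte blocks (an assumption; the trace reports len only in blocks):

# len:128 blocks x 512 B/block = the 65536-byte IO size bdevperf was given
echo $((128 * 512))                              # prints 65536
# IOPS -> MiB/s at that IO size, matching the table row (1470.41 -> 91.90)
echo 'scale=2; 1470.41 * 65536 / 1048576' | bc   # prints 91.90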
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 67170
00:07:41.089 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (67170) - No such process
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:41.089 {
00:07:41.089 "params": {
00:07:41.089 "name": "Nvme$subsystem",
00:07:41.089 "trtype": "$TEST_TRANSPORT",
00:07:41.089 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:41.089 "adrfam": "ipv4",
00:07:41.089 "trsvcid": "$NVMF_PORT",
00:07:41.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:41.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:41.089 "hdgst": ${hdgst:-false},
00:07:41.089 "ddgst": ${ddgst:-false}
00:07:41.089 },
00:07:41.089 "method": "bdev_nvme_attach_controller"
00:07:41.089 }
00:07:41.089 EOF
00:07:41.089 )")
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:07:41.089 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:41.089 "params": {
00:07:41.089 "name": "Nvme0",
00:07:41.089 "trtype": "tcp",
00:07:41.089 "traddr": "10.0.0.3",
00:07:41.089 "adrfam": "ipv4",
00:07:41.089 "trsvcid": "4420",
00:07:41.089 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:41.089 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:07:41.089 "hdgst": false,
00:07:41.089 "ddgst": false
00:07:41.089 },
00:07:41.089 "method": "bdev_nvme_attach_controller"
00:07:41.089 }'
[2024-12-16 11:29:41.734126] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
[2024-12-16 11:29:41.734248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67226 ]
[2024-12-16 11:29:41.885611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 11:29:41.956431] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
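The heredoc traced above is how the test hands bdevperf its target: gen_nvmf_target_json renders one bdev_nvme_attach_controller entry per subsystem and bdevperf reads the result from a file descriptor (--json /dev/fd/62). A minimal runnable sketch of the same pattern, with illustrative defaults filled in for the environment variables (the real helper in the test's nvmf/common.sh builds the full bdevperf JSON document around these entries, which is not shown in the trace):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.3}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .   # pretty-print and validate (jq required)
}
# bdevperf then reads the config from a substituted fd, e.g.:
# ./bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1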
00:07:42.540 1472.00 IOPS, 92.00 MiB/s
00:07:42.540
00:07:42.540 Latency(us)
00:07:42.540 [2024-12-16T11:29:43.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:42.540 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:42.540 Verification LBA range: start 0x0 length 0x400
00:07:42.540 Nvme0n1 : 1.01 1522.89 95.18 0.00 0.00 41178.42 6106.76 37653.41
00:07:42.540 [2024-12-16T11:29:43.293Z] ===================================================================================================================
00:07:42.540 [2024-12-16T11:29:43.293Z] Total : 1522.89 95.18 0.00 0.00 41178.42 6106.76 37653.41
00:07:42.799 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 67117 ']'
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 67117
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 67117 ']'
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 67117
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67117
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
killing process with pid 67117
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67117'
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 67117
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 67117
[2024-12-16 11:29:43.715567] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:07:43.316
00:07:43.316 real 0m5.872s
00:07:43.316 user 0m21.635s
00:07:43.316 sys 0m1.590s
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
11:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:43.316 ************************************
00:07:43.316 END TEST nvmf_host_management
00:07:43.316 ************************************
11:29:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
11:29:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
11:29:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
11:29:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:43.316 ************************************
00:07:43.316 START TEST nvmf_lvol
00:07:43.316 ************************************
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:07:43.576 * Looking for test storage...
00:07:43.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:43.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.576 --rc genhtml_branch_coverage=1 00:07:43.576 --rc genhtml_function_coverage=1 00:07:43.576 --rc genhtml_legend=1 00:07:43.576 --rc geninfo_all_blocks=1 00:07:43.576 --rc geninfo_unexecuted_blocks=1 00:07:43.576 00:07:43.576 ' 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:43.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.576 --rc genhtml_branch_coverage=1 00:07:43.576 --rc genhtml_function_coverage=1 00:07:43.576 --rc genhtml_legend=1 00:07:43.576 --rc geninfo_all_blocks=1 00:07:43.576 --rc geninfo_unexecuted_blocks=1 00:07:43.576 00:07:43.576 ' 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:43.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.576 --rc genhtml_branch_coverage=1 00:07:43.576 --rc genhtml_function_coverage=1 00:07:43.576 --rc genhtml_legend=1 00:07:43.576 --rc geninfo_all_blocks=1 00:07:43.576 --rc geninfo_unexecuted_blocks=1 00:07:43.576 00:07:43.576 ' 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:43.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.576 --rc genhtml_branch_coverage=1 00:07:43.576 --rc genhtml_function_coverage=1 00:07:43.576 --rc genhtml_legend=1 00:07:43.576 --rc geninfo_all_blocks=1 00:07:43.576 --rc geninfo_unexecuted_blocks=1 00:07:43.576 00:07:43.576 ' 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.576 11:29:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:43.576 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:43.576 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:43.577 
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:43.577 Cannot find device "nvmf_init_br" 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:43.577 Cannot find device "nvmf_init_br2" 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:43.577 Cannot find device "nvmf_tgt_br" 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:43.577 Cannot find device "nvmf_tgt_br2" 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:43.577 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:43.836 Cannot find device "nvmf_init_br" 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:43.836 Cannot find device "nvmf_init_br2" 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:43.836 Cannot find device "nvmf_tgt_br" 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:43.836 Cannot find device "nvmf_tgt_br2" 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:43.836 Cannot find device "nvmf_br" 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:43.836 Cannot find device "nvmf_init_if" 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:43.836 Cannot find device "nvmf_init_if2" 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:43.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:43.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:43.836 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:43.837 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:43.837 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:43.837 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:43.837 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:43.837 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:07:44.147 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:07:44.147 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms
00:07:44.147
00:07:44.147 --- 10.0.0.3 ping statistics ---
00:07:44.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:44.147 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:07:44.147 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:07:44.147 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms
00:07:44.147
00:07:44.147 --- 10.0.0.4 ping statistics ---
00:07:44.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:44.147 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:07:44.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:44.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms
00:07:44.147
00:07:44.147 --- 10.0.0.1 ping statistics ---
00:07:44.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:44.147 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:07:44.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:44.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms
00:07:44.147
00:07:44.147 --- 10.0.0.2 ping statistics ---
00:07:44.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:44.147 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=67493
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 67493
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 67493 ']'
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
11:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
[2024-12-16 11:29:44.703064] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
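Condensing the nvmftestinit trace above: the fixture builds veth pairs, moves the target-side ends into the nvmf_tgt_ns_spdk namespace, enslaves the peer ends to the nvmf_br bridge, opens TCP port 4420, and smoke-tests the path with ping. A sketch of the core commands, reduced to one interface per side (the fixture actually creates two initiator and two target interfaces, as the trace shows):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # initiator side reaching the target address across the bridge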
00:07:44.148 [2024-12-16 11:29:44.703164] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.148 [2024-12-16 11:29:44.858160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:44.406 [2024-12-16 11:29:44.933192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.406 [2024-12-16 11:29:44.933288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.406 [2024-12-16 11:29:44.933303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.406 [2024-12-16 11:29:44.933313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.406 [2024-12-16 11:29:44.933323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.406 [2024-12-16 11:29:44.934679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.406 [2024-12-16 11:29:44.934795] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.406 [2024-12-16 11:29:44.934805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.406 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.406 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:44.406 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:44.406 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:44.406 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:44.406 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.406 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:44.664 [2024-12-16 11:29:45.408056] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.922 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:45.181 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:45.181 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:45.748 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:45.748 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:46.006 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:46.265 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bfc3a68b-82b3-4733-b37d-17910f784664 00:07:46.265 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
bfc3a68b-82b3-4733-b37d-17910f784664 lvol 20 00:07:46.523 11:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d87dd26c-937b-410a-941e-c9d079dc22f5 00:07:46.523 11:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:47.090 11:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d87dd26c-937b-410a-941e-c9d079dc22f5 00:07:47.348 11:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:47.607 [2024-12-16 11:29:48.229599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:47.607 11:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:47.865 11:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=67632 00:07:47.865 11:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:47.865 11:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:49.266 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot d87dd26c-937b-410a-941e-c9d079dc22f5 MY_SNAPSHOT 00:07:49.266 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=783de9f7-3ed5-4f04-85de-62164aa6370e 00:07:49.266 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize d87dd26c-937b-410a-941e-c9d079dc22f5 30 00:07:49.833 11:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 783de9f7-3ed5-4f04-85de-62164aa6370e MY_CLONE 00:07:50.092 11:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6ca44227-5870-41c0-b4f7-55d027261451 00:07:50.092 11:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 6ca44227-5870-41c0-b4f7-55d027261451 00:07:51.025 11:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 67632 00:07:59.153 Initializing NVMe Controllers 00:07:59.153 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:59.153 Controller IO queue size 128, less than required. 00:07:59.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:59.153 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:59.153 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:59.153 Initialization complete. Launching workers. 
00:07:59.153 ======================================================== 00:07:59.153 Latency(us) 00:07:59.153 Device Information : IOPS MiB/s Average min max 00:07:59.153 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7359.40 28.75 17401.27 448.13 90267.22 00:07:59.153 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7664.00 29.94 16702.23 3574.94 76180.00 00:07:59.153 ======================================================== 00:07:59.153 Total : 15023.40 58.69 17044.66 448.13 90267.22 00:07:59.153 00:07:59.153 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d87dd26c-937b-410a-941e-c9d079dc22f5 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bfc3a68b-82b3-4733-b37d-17910f784664 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.153 rmmod nvme_tcp 00:07:59.153 rmmod nvme_fabrics 00:07:59.153 rmmod nvme_keyring 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 67493 ']' 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 67493 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 67493 ']' 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 67493 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.153 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67493 00:07:59.443 killing process with pid 67493 00:07:59.443 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.443 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.443 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67493' 00:07:59.443 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 67493 00:07:59.443 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 67493 00:07:59.443 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:59.443 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:59.443 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:59.443 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:59.443 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:59.443 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:59.443 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.443 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.443 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:59.443 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:59.443 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:59.701 00:07:59.701 real 0m16.365s 00:07:59.701 user 1m7.991s 00:07:59.701 sys 0m3.856s 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.701 ************************************ 00:07:59.701 END TEST nvmf_lvol 00:07:59.701 ************************************ 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.701 11:30:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.959 ************************************ 00:07:59.959 START TEST nvmf_lvs_grow 00:07:59.959 ************************************ 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:59.959 * Looking for test storage... 00:07:59.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:59.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.959 --rc genhtml_branch_coverage=1 00:07:59.959 --rc genhtml_function_coverage=1 00:07:59.959 --rc genhtml_legend=1 00:07:59.959 --rc geninfo_all_blocks=1 00:07:59.959 --rc geninfo_unexecuted_blocks=1 00:07:59.959 00:07:59.959 ' 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:59.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.959 --rc genhtml_branch_coverage=1 00:07:59.959 --rc genhtml_function_coverage=1 00:07:59.959 --rc genhtml_legend=1 00:07:59.959 --rc geninfo_all_blocks=1 00:07:59.959 --rc geninfo_unexecuted_blocks=1 00:07:59.959 00:07:59.959 ' 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:59.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.959 --rc genhtml_branch_coverage=1 00:07:59.959 --rc genhtml_function_coverage=1 00:07:59.959 --rc genhtml_legend=1 00:07:59.959 --rc geninfo_all_blocks=1 00:07:59.959 --rc geninfo_unexecuted_blocks=1 00:07:59.959 00:07:59.959 ' 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:59.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.959 --rc genhtml_branch_coverage=1 00:07:59.959 --rc genhtml_function_coverage=1 00:07:59.959 --rc genhtml_legend=1 00:07:59.959 --rc geninfo_all_blocks=1 00:07:59.959 --rc geninfo_unexecuted_blocks=1 00:07:59.959 00:07:59.959 ' 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:59.959 11:30:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.959 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.960 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
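The "[: : integer expression expected" warning printed while common.sh ran build_nvmf_app_args above comes from bash's numeric test being handed an empty string: the xtrace shows '[' '' -eq 1 ']', i.e. a test flag that was never set for this job. A minimal sketch of the failure and the usual ${var:-0} guard, using a placeholder variable name (the actual flag name is not visible in this log):

    # placeholder flag; empty, as in the captured run
    flag=""
    [ "$flag" -eq 1 ]            # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] || :  # guarded form: empty expands to 0, test stays quiet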
00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
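These variables (the list continues on the next lines with nvmf_tgt_br2) describe the veth topology that nvmf_veth_init builds: two initiator interfaces on the host (nvmf_init_if/if2 at 10.0.0.1-2) and two target interfaces moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if/if2 at 10.0.0.3-4), with every veth peer enslaved to the nvmf_br bridge. A condensed sketch of one such pair, using the same commands the log records below (the second pair and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br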
00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:59.960 Cannot find device "nvmf_init_br" 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:59.960 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:00.218 Cannot find device "nvmf_init_br2" 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:00.218 Cannot find device "nvmf_tgt_br" 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:00.218 Cannot find device "nvmf_tgt_br2" 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:00.218 Cannot find device "nvmf_init_br" 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:00.218 Cannot find device "nvmf_init_br2" 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:00.218 Cannot find device "nvmf_tgt_br" 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:00.218 Cannot find device "nvmf_tgt_br2" 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:00.218 Cannot find device "nvmf_br" 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:00.218 Cannot find device "nvmf_init_if" 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:00.218 Cannot find device "nvmf_init_if2" 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:00.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:00.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:00.218 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:00.476 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:00.476 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
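With the bridge assembled, common.sh opens the firewall: the ipts lines that follow insert INPUT accepts for TCP port 4420 on each initiator interface plus a FORWARD accept for bridge-local traffic, and every rule carries an 'SPDK_NVMF:' comment. That tag is what the iptr teardown step (visible at the end of the nvmf_lvol run above) keys on, filtering the saved ruleset instead of deleting rules one by one. A minimal sketch of the setup/teardown pairing, assuming the same interface and tag names:

    # setup: rules tagged so teardown can find them
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    # teardown (iptr): restore everything except SPDK_NVMF-tagged rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore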
00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:00.476 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:00.476 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:08:00.476 00:08:00.476 --- 10.0.0.3 ping statistics --- 00:08:00.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.476 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:00.476 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:00.476 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:08:00.476 00:08:00.476 --- 10.0.0.4 ping statistics --- 00:08:00.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.476 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:00.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:00.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:00.476 00:08:00.476 --- 10.0.0.1 ping statistics --- 00:08:00.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.476 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:00.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:00.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:08:00.476 00:08:00.476 --- 10.0.0.2 ping statistics --- 00:08:00.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.476 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=68057 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 68057 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 68057 ']' 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.476 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:00.476 [2024-12-16 11:30:01.167746] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:08:00.476 [2024-12-16 11:30:01.167902] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.733 [2024-12-16 11:30:01.324488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.733 [2024-12-16 11:30:01.386489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.733 [2024-12-16 11:30:01.386542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.733 [2024-12-16 11:30:01.386555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.733 [2024-12-16 11:30:01.386564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.733 [2024-12-16 11:30:01.386572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.733 [2024-12-16 11:30:01.386961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.664 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.664 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:01.664 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:01.664 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:01.664 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:01.664 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.664 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:01.922 [2024-12-16 11:30:02.595722] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:01.922 ************************************ 00:08:01.922 START TEST lvs_grow_clean 00:08:01.922 ************************************ 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:01.922 11:30:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:01.922 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.488 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:02.488 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:02.805 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5ed93a21-1c53-4725-aaa8-3b85f5a38e08 00:08:02.806 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ed93a21-1c53-4725-aaa8-3b85f5a38e08 00:08:02.806 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:03.064 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:03.064 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:03.064 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ed93a21-1c53-4725-aaa8-3b85f5a38e08 lvol 150 00:08:03.321 11:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=70917d11-9f20-45af-9e80-82837430a976 00:08:03.321 11:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:03.321 11:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:03.579 [2024-12-16 11:30:04.322178] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:03.579 [2024-12-16 11:30:04.322259] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:03.579 true 00:08:03.837 11:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ed93a21-1c53-4725-aaa8-3b85f5a38e08 00:08:03.837 11:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:04.095 11:30:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:04.095 11:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:04.352 11:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 70917d11-9f20-45af-9e80-82837430a976 00:08:04.610 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:04.868 [2024-12-16 11:30:05.494872] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:04.868 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:05.125 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68224 00:08:05.125 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:05.125 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:05.125 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68224 /var/tmp/bdevperf.sock 00:08:05.125 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 68224 ']' 00:08:05.125 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:05.125 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.125 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:05.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:05.125 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.125 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:05.126 [2024-12-16 11:30:05.857868] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:08:05.126 [2024-12-16 11:30:05.857962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68224 ] 00:08:05.384 [2024-12-16 11:30:06.000971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.384 [2024-12-16 11:30:06.065490] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.642 11:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.642 11:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:05.642 11:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:05.900 Nvme0n1 00:08:05.900 11:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:06.158 [ 00:08:06.158 { 00:08:06.158 "aliases": [ 00:08:06.158 "70917d11-9f20-45af-9e80-82837430a976" 00:08:06.158 ], 00:08:06.158 "assigned_rate_limits": { 00:08:06.158 "r_mbytes_per_sec": 0, 00:08:06.158 "rw_ios_per_sec": 0, 00:08:06.158 "rw_mbytes_per_sec": 0, 00:08:06.158 "w_mbytes_per_sec": 0 00:08:06.158 }, 00:08:06.158 "block_size": 4096, 00:08:06.158 "claimed": false, 00:08:06.158 "driver_specific": { 00:08:06.158 "mp_policy": "active_passive", 00:08:06.158 "nvme": [ 00:08:06.158 { 00:08:06.158 "ctrlr_data": { 00:08:06.158 "ana_reporting": false, 00:08:06.158 "cntlid": 1, 00:08:06.158 "firmware_revision": "25.01", 00:08:06.158 "model_number": "SPDK bdev Controller", 00:08:06.158 "multi_ctrlr": true, 00:08:06.158 "oacs": { 00:08:06.158 "firmware": 0, 00:08:06.158 "format": 0, 00:08:06.158 "ns_manage": 0, 00:08:06.158 "security": 0 00:08:06.158 }, 00:08:06.158 "serial_number": "SPDK0", 00:08:06.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:06.158 "vendor_id": "0x8086" 00:08:06.158 }, 00:08:06.158 "ns_data": { 00:08:06.158 "can_share": true, 00:08:06.158 "id": 1 00:08:06.158 }, 00:08:06.158 "trid": { 00:08:06.158 "adrfam": "IPv4", 00:08:06.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:06.158 "traddr": "10.0.0.3", 00:08:06.158 "trsvcid": "4420", 00:08:06.158 "trtype": "TCP" 00:08:06.158 }, 00:08:06.158 "vs": { 00:08:06.158 "nvme_version": "1.3" 00:08:06.158 } 00:08:06.158 } 00:08:06.158 ] 00:08:06.158 }, 00:08:06.158 "memory_domains": [ 00:08:06.158 { 00:08:06.158 "dma_device_id": "system", 00:08:06.158 "dma_device_type": 1 00:08:06.158 } 00:08:06.158 ], 00:08:06.158 "name": "Nvme0n1", 00:08:06.158 "num_blocks": 38912, 00:08:06.158 "numa_id": -1, 00:08:06.158 "product_name": "NVMe disk", 00:08:06.158 "supported_io_types": { 00:08:06.158 "abort": true, 00:08:06.158 "compare": true, 00:08:06.158 "compare_and_write": true, 00:08:06.158 "copy": true, 00:08:06.158 "flush": true, 00:08:06.158 "get_zone_info": false, 00:08:06.158 "nvme_admin": true, 00:08:06.158 "nvme_io": true, 00:08:06.158 "nvme_io_md": false, 00:08:06.158 "nvme_iov_md": false, 00:08:06.158 "read": true, 00:08:06.158 "reset": true, 00:08:06.158 "seek_data": false, 00:08:06.158 "seek_hole": false, 00:08:06.158 "unmap": true, 00:08:06.158 
"write": true, 00:08:06.158 "write_zeroes": true, 00:08:06.158 "zcopy": false, 00:08:06.158 "zone_append": false, 00:08:06.158 "zone_management": false 00:08:06.158 }, 00:08:06.158 "uuid": "70917d11-9f20-45af-9e80-82837430a976", 00:08:06.158 "zoned": false 00:08:06.158 } 00:08:06.158 ] 00:08:06.158 11:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68258 00:08:06.158 11:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:06.158 11:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:06.158 Running I/O for 10 seconds... 00:08:07.625 Latency(us) 00:08:07.625 [2024-12-16T11:30:08.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.625 Nvme0n1 : 1.00 8381.00 32.74 0.00 0.00 0.00 0.00 0.00 00:08:07.625 [2024-12-16T11:30:08.378Z] =================================================================================================================== 00:08:07.625 [2024-12-16T11:30:08.378Z] Total : 8381.00 32.74 0.00 0.00 0.00 0.00 0.00 00:08:07.625 00:08:08.191 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5ed93a21-1c53-4725-aaa8-3b85f5a38e08 00:08:08.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.191 Nvme0n1 : 2.00 8241.00 32.19 0.00 0.00 0.00 0.00 0.00 00:08:08.191 [2024-12-16T11:30:08.944Z] =================================================================================================================== 00:08:08.191 [2024-12-16T11:30:08.944Z] Total : 8241.00 32.19 0.00 0.00 0.00 0.00 0.00 00:08:08.191 00:08:08.448 true 00:08:08.449 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:08.449 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ed93a21-1c53-4725-aaa8-3b85f5a38e08 00:08:09.015 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:09.015 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:09.015 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 68258 00:08:09.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.274 Nvme0n1 : 3.00 8092.00 31.61 0.00 0.00 0.00 0.00 0.00 00:08:09.274 [2024-12-16T11:30:10.027Z] =================================================================================================================== 00:08:09.274 [2024-12-16T11:30:10.027Z] Total : 8092.00 31.61 0.00 0.00 0.00 0.00 0.00 00:08:09.274 00:08:10.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.208 Nvme0n1 : 4.00 8039.75 31.41 0.00 0.00 0.00 0.00 0.00 00:08:10.208 [2024-12-16T11:30:10.961Z] =================================================================================================================== 00:08:10.208 [2024-12-16T11:30:10.961Z] Total : 8039.75 31.41 0.00 0.00 0.00 
0.00 0.00 00:08:10.208 00:08:11.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.584 Nvme0n1 : 5.00 8055.60 31.47 0.00 0.00 0.00 0.00 0.00 00:08:11.584 [2024-12-16T11:30:12.337Z] =================================================================================================================== 00:08:11.584 [2024-12-16T11:30:12.337Z] Total : 8055.60 31.47 0.00 0.00 0.00 0.00 0.00 00:08:11.584 00:08:12.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.150 Nvme0n1 : 6.00 7983.00 31.18 0.00 0.00 0.00 0.00 0.00 00:08:12.150 [2024-12-16T11:30:12.903Z] =================================================================================================================== 00:08:12.150 [2024-12-16T11:30:12.903Z] Total : 7983.00 31.18 0.00 0.00 0.00 0.00 0.00 00:08:12.150 00:08:13.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.535 Nvme0n1 : 7.00 7937.29 31.01 0.00 0.00 0.00 0.00 0.00 00:08:13.535 [2024-12-16T11:30:14.288Z] =================================================================================================================== 00:08:13.535 [2024-12-16T11:30:14.288Z] Total : 7937.29 31.01 0.00 0.00 0.00 0.00 0.00 00:08:13.535 00:08:14.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.480 Nvme0n1 : 8.00 7876.62 30.77 0.00 0.00 0.00 0.00 0.00 00:08:14.480 [2024-12-16T11:30:15.233Z] =================================================================================================================== 00:08:14.480 [2024-12-16T11:30:15.233Z] Total : 7876.62 30.77 0.00 0.00 0.00 0.00 0.00 00:08:14.480 00:08:15.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.415 Nvme0n1 : 9.00 7775.00 30.37 0.00 0.00 0.00 0.00 0.00 00:08:15.415 [2024-12-16T11:30:16.168Z] =================================================================================================================== 00:08:15.415 [2024-12-16T11:30:16.168Z] Total : 7775.00 30.37 0.00 0.00 0.00 0.00 0.00 00:08:15.415 00:08:16.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.350 Nvme0n1 : 10.00 7730.70 30.20 0.00 0.00 0.00 0.00 0.00 00:08:16.350 [2024-12-16T11:30:17.103Z] =================================================================================================================== 00:08:16.350 [2024-12-16T11:30:17.103Z] Total : 7730.70 30.20 0.00 0.00 0.00 0.00 0.00 00:08:16.350 00:08:16.350 00:08:16.350 Latency(us) 00:08:16.350 [2024-12-16T11:30:17.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.350 Nvme0n1 : 10.01 7736.46 30.22 0.00 0.00 16538.95 5749.29 74353.57 00:08:16.350 [2024-12-16T11:30:17.103Z] =================================================================================================================== 00:08:16.350 [2024-12-16T11:30:17.103Z] Total : 7736.46 30.22 0.00 0.00 16538.95 5749.29 74353.57 00:08:16.350 { 00:08:16.350 "results": [ 00:08:16.350 { 00:08:16.350 "job": "Nvme0n1", 00:08:16.350 "core_mask": "0x2", 00:08:16.350 "workload": "randwrite", 00:08:16.350 "status": "finished", 00:08:16.350 "queue_depth": 128, 00:08:16.350 "io_size": 4096, 00:08:16.350 "runtime": 10.009099, 00:08:16.350 "iops": 7736.46059450506, 00:08:16.350 "mibps": 30.22054919728539, 00:08:16.350 "io_failed": 0, 00:08:16.350 "io_timeout": 0, 00:08:16.350 "avg_latency_us": 16538.953283469422, 
00:08:16.350 "min_latency_us": 5749.294545454545, 00:08:16.350 "max_latency_us": 74353.57090909091 00:08:16.350 } 00:08:16.350 ], 00:08:16.350 "core_count": 1 00:08:16.350 } 00:08:16.350 11:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68224 00:08:16.350 11:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 68224 ']' 00:08:16.350 11:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 68224 00:08:16.350 11:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:16.350 11:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.350 11:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68224 00:08:16.351 11:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:16.351 11:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:16.351 killing process with pid 68224 00:08:16.351 11:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68224' 00:08:16.351 11:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 68224 00:08:16.351 Received shutdown signal, test time was about 10.000000 seconds 00:08:16.351 00:08:16.351 Latency(us) 00:08:16.351 [2024-12-16T11:30:17.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.351 [2024-12-16T11:30:17.104Z] =================================================================================================================== 00:08:16.351 [2024-12-16T11:30:17.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:16.351 11:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 68224 00:08:16.609 11:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:16.867 11:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:17.126 11:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ed93a21-1c53-4725-aaa8-3b85f5a38e08 00:08:17.126 11:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:17.385 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:17.385 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:17.385 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:17.643 [2024-12-16 11:30:18.366824] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:17.902 
11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ed93a21-1c53-4725-aaa8-3b85f5a38e08 00:08:17.902 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:17.902 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ed93a21-1c53-4725-aaa8-3b85f5a38e08 00:08:17.902 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:17.902 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.902 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:17.902 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.902 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:17.902 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.902 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:17.902 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:17.902 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ed93a21-1c53-4725-aaa8-3b85f5a38e08 00:08:18.161 2024/12/16 11:30:18 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:5ed93a21-1c53-4725-aaa8-3b85f5a38e08], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:08:18.161 request: 00:08:18.161 { 00:08:18.161 "method": "bdev_lvol_get_lvstores", 00:08:18.161 "params": { 00:08:18.161 "uuid": "5ed93a21-1c53-4725-aaa8-3b85f5a38e08" 00:08:18.161 } 00:08:18.161 } 00:08:18.161 Got JSON-RPC error response 00:08:18.161 GoRPCClient: error on JSON-RPC call 00:08:18.161 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:18.161 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.161 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:18.161 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.161 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:18.420 aio_bdev 00:08:18.420 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 70917d11-9f20-45af-9e80-82837430a976 00:08:18.420 11:30:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=70917d11-9f20-45af-9e80-82837430a976 00:08:18.420 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.420 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:18.420 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.420 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.420 11:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:18.678 11:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 70917d11-9f20-45af-9e80-82837430a976 -t 2000 00:08:18.937 [ 00:08:18.937 { 00:08:18.937 "aliases": [ 00:08:18.937 "lvs/lvol" 00:08:18.937 ], 00:08:18.937 "assigned_rate_limits": { 00:08:18.937 "r_mbytes_per_sec": 0, 00:08:18.937 "rw_ios_per_sec": 0, 00:08:18.937 "rw_mbytes_per_sec": 0, 00:08:18.937 "w_mbytes_per_sec": 0 00:08:18.937 }, 00:08:18.937 "block_size": 4096, 00:08:18.937 "claimed": false, 00:08:18.937 "driver_specific": { 00:08:18.937 "lvol": { 00:08:18.937 "base_bdev": "aio_bdev", 00:08:18.937 "clone": false, 00:08:18.937 "esnap_clone": false, 00:08:18.937 "lvol_store_uuid": "5ed93a21-1c53-4725-aaa8-3b85f5a38e08", 00:08:18.937 "num_allocated_clusters": 38, 00:08:18.937 "snapshot": false, 00:08:18.937 "thin_provision": false 00:08:18.937 } 00:08:18.937 }, 00:08:18.937 "name": "70917d11-9f20-45af-9e80-82837430a976", 00:08:18.937 "num_blocks": 38912, 00:08:18.937 "product_name": "Logical Volume", 00:08:18.937 "supported_io_types": { 00:08:18.937 "abort": false, 00:08:18.937 "compare": false, 00:08:18.937 "compare_and_write": false, 00:08:18.937 "copy": false, 00:08:18.937 "flush": false, 00:08:18.937 "get_zone_info": false, 00:08:18.937 "nvme_admin": false, 00:08:18.937 "nvme_io": false, 00:08:18.937 "nvme_io_md": false, 00:08:18.937 "nvme_iov_md": false, 00:08:18.937 "read": true, 00:08:18.937 "reset": true, 00:08:18.937 "seek_data": true, 00:08:18.937 "seek_hole": true, 00:08:18.937 "unmap": true, 00:08:18.937 "write": true, 00:08:18.937 "write_zeroes": true, 00:08:18.937 "zcopy": false, 00:08:18.937 "zone_append": false, 00:08:18.937 "zone_management": false 00:08:18.937 }, 00:08:18.937 "uuid": "70917d11-9f20-45af-9e80-82837430a976", 00:08:18.937 "zoned": false 00:08:18.937 } 00:08:18.937 ] 00:08:18.937 11:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:18.937 11:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ed93a21-1c53-4725-aaa8-3b85f5a38e08 00:08:18.937 11:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:19.195 11:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:19.195 11:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
5ed93a21-1c53-4725-aaa8-3b85f5a38e08 00:08:19.195 11:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:19.453 11:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:19.453 11:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 70917d11-9f20-45af-9e80-82837430a976 00:08:19.737 11:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ed93a21-1c53-4725-aaa8-3b85f5a38e08 00:08:19.996 11:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:20.256 11:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:20.823 00:08:20.823 real 0m18.712s 00:08:20.823 user 0m17.848s 00:08:20.823 sys 0m2.274s 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:20.823 ************************************ 00:08:20.823 END TEST lvs_grow_clean 00:08:20.823 ************************************ 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:20.823 ************************************ 00:08:20.823 START TEST lvs_grow_dirty 00:08:20.823 ************************************ 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:20.823 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:20.824 
11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:21.083 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:21.083 11:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:21.340 11:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cf01c725-5dee-4083-a460-7336382c7b92 00:08:21.340 11:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:21.340 11:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:21.907 11:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:21.907 11:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:21.907 11:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cf01c725-5dee-4083-a460-7336382c7b92 lvol 150 00:08:22.165 11:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ad258545-4ca9-419d-bc25-bb31835a9994 00:08:22.165 11:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:22.165 11:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:22.424 [2024-12-16 11:30:23.040975] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:22.424 [2024-12-16 11:30:23.041078] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:22.424 true 00:08:22.424 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:22.424 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:22.682 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:22.682 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:22.940 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ad258545-4ca9-419d-bc25-bb31835a9994 00:08:23.198 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:23.457 [2024-12-16 11:30:24.109545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:23.457 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:23.715 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:23.715 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68667 00:08:23.715 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:23.715 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68667 /var/tmp/bdevperf.sock 00:08:23.715 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 68667 ']' 00:08:23.715 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:23.715 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:23.715 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:23.715 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.715 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:23.715 [2024-12-16 11:30:24.436814] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
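Both the clean and the dirty variant exercise the same grow pattern seen in the trace: back an lvstore with a file-based AIO bdev, enlarge the file, rescan the bdev, then let the lvstore claim the new clusters. A condensed sketch of that flow, with paths, sizes, and expected values taken from this run (4 MiB clusters, a 200 MiB file grown to 400 MiB, so total_data_clusters moves from 49 to 99):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)           # 150 MiB volume

    truncate -s 400M "$aio"                  # grow the backing file
    $rpc bdev_aio_rescan aio_bdev            # block count 51200 -> 102400
    $rpc bdev_lvol_grow_lvstore -u "$lvs"    # lvstore claims the new space
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 99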
00:08:23.715 [2024-12-16 11:30:24.436902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68667 ] 00:08:23.974 [2024-12-16 11:30:24.585837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.974 [2024-12-16 11:30:24.647680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.913 11:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.913 11:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:24.913 11:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:25.172 Nvme0n1 00:08:25.172 11:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:25.430 [ 00:08:25.430 { 00:08:25.430 "aliases": [ 00:08:25.430 "ad258545-4ca9-419d-bc25-bb31835a9994" 00:08:25.430 ], 00:08:25.430 "assigned_rate_limits": { 00:08:25.430 "r_mbytes_per_sec": 0, 00:08:25.430 "rw_ios_per_sec": 0, 00:08:25.430 "rw_mbytes_per_sec": 0, 00:08:25.430 "w_mbytes_per_sec": 0 00:08:25.430 }, 00:08:25.430 "block_size": 4096, 00:08:25.430 "claimed": false, 00:08:25.430 "driver_specific": { 00:08:25.430 "mp_policy": "active_passive", 00:08:25.430 "nvme": [ 00:08:25.430 { 00:08:25.430 "ctrlr_data": { 00:08:25.430 "ana_reporting": false, 00:08:25.430 "cntlid": 1, 00:08:25.430 "firmware_revision": "25.01", 00:08:25.430 "model_number": "SPDK bdev Controller", 00:08:25.430 "multi_ctrlr": true, 00:08:25.430 "oacs": { 00:08:25.430 "firmware": 0, 00:08:25.430 "format": 0, 00:08:25.430 "ns_manage": 0, 00:08:25.430 "security": 0 00:08:25.430 }, 00:08:25.430 "serial_number": "SPDK0", 00:08:25.430 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:25.430 "vendor_id": "0x8086" 00:08:25.430 }, 00:08:25.430 "ns_data": { 00:08:25.430 "can_share": true, 00:08:25.430 "id": 1 00:08:25.430 }, 00:08:25.430 "trid": { 00:08:25.430 "adrfam": "IPv4", 00:08:25.430 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:25.430 "traddr": "10.0.0.3", 00:08:25.430 "trsvcid": "4420", 00:08:25.430 "trtype": "TCP" 00:08:25.430 }, 00:08:25.430 "vs": { 00:08:25.430 "nvme_version": "1.3" 00:08:25.430 } 00:08:25.430 } 00:08:25.430 ] 00:08:25.430 }, 00:08:25.430 "memory_domains": [ 00:08:25.430 { 00:08:25.430 "dma_device_id": "system", 00:08:25.430 "dma_device_type": 1 00:08:25.430 } 00:08:25.430 ], 00:08:25.430 "name": "Nvme0n1", 00:08:25.431 "num_blocks": 38912, 00:08:25.431 "numa_id": -1, 00:08:25.431 "product_name": "NVMe disk", 00:08:25.431 "supported_io_types": { 00:08:25.431 "abort": true, 00:08:25.431 "compare": true, 00:08:25.431 "compare_and_write": true, 00:08:25.431 "copy": true, 00:08:25.431 "flush": true, 00:08:25.431 "get_zone_info": false, 00:08:25.431 "nvme_admin": true, 00:08:25.431 "nvme_io": true, 00:08:25.431 "nvme_io_md": false, 00:08:25.431 "nvme_iov_md": false, 00:08:25.431 "read": true, 00:08:25.431 "reset": true, 00:08:25.431 "seek_data": false, 00:08:25.431 "seek_hole": false, 00:08:25.431 "unmap": true, 00:08:25.431 
"write": true, 00:08:25.431 "write_zeroes": true, 00:08:25.431 "zcopy": false, 00:08:25.431 "zone_append": false, 00:08:25.431 "zone_management": false 00:08:25.431 }, 00:08:25.431 "uuid": "ad258545-4ca9-419d-bc25-bb31835a9994", 00:08:25.431 "zoned": false 00:08:25.431 } 00:08:25.431 ] 00:08:25.431 11:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68715 00:08:25.431 11:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:25.431 11:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:25.431 Running I/O for 10 seconds... 00:08:26.807 Latency(us) 00:08:26.807 [2024-12-16T11:30:27.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.807 Nvme0n1 : 1.00 7678.00 29.99 0.00 0.00 0.00 0.00 0.00 00:08:26.807 [2024-12-16T11:30:27.560Z] =================================================================================================================== 00:08:26.807 [2024-12-16T11:30:27.560Z] Total : 7678.00 29.99 0.00 0.00 0.00 0.00 0.00 00:08:26.807 00:08:27.374 11:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:27.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.633 Nvme0n1 : 2.00 7903.50 30.87 0.00 0.00 0.00 0.00 0.00 00:08:27.633 [2024-12-16T11:30:28.386Z] =================================================================================================================== 00:08:27.633 [2024-12-16T11:30:28.386Z] Total : 7903.50 30.87 0.00 0.00 0.00 0.00 0.00 00:08:27.633 00:08:27.633 true 00:08:27.633 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:27.633 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:28.225 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:28.225 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:28.225 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 68715 00:08:28.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.483 Nvme0n1 : 3.00 7912.00 30.91 0.00 0.00 0.00 0.00 0.00 00:08:28.483 [2024-12-16T11:30:29.236Z] =================================================================================================================== 00:08:28.483 [2024-12-16T11:30:29.236Z] Total : 7912.00 30.91 0.00 0.00 0.00 0.00 0.00 00:08:28.483 00:08:29.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.419 Nvme0n1 : 4.00 7869.75 30.74 0.00 0.00 0.00 0.00 0.00 00:08:29.419 [2024-12-16T11:30:30.172Z] =================================================================================================================== 00:08:29.419 [2024-12-16T11:30:30.172Z] Total : 7869.75 30.74 0.00 0.00 0.00 
0.00 0.00 00:08:29.419 00:08:30.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.795 Nvme0n1 : 5.00 7805.60 30.49 0.00 0.00 0.00 0.00 0.00 00:08:30.795 [2024-12-16T11:30:31.548Z] =================================================================================================================== 00:08:30.795 [2024-12-16T11:30:31.548Z] Total : 7805.60 30.49 0.00 0.00 0.00 0.00 0.00 00:08:30.795 00:08:31.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.737 Nvme0n1 : 6.00 7761.67 30.32 0.00 0.00 0.00 0.00 0.00 00:08:31.737 [2024-12-16T11:30:32.490Z] =================================================================================================================== 00:08:31.737 [2024-12-16T11:30:32.490Z] Total : 7761.67 30.32 0.00 0.00 0.00 0.00 0.00 00:08:31.737 00:08:32.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.722 Nvme0n1 : 7.00 7729.29 30.19 0.00 0.00 0.00 0.00 0.00 00:08:32.722 [2024-12-16T11:30:33.475Z] =================================================================================================================== 00:08:32.722 [2024-12-16T11:30:33.475Z] Total : 7729.29 30.19 0.00 0.00 0.00 0.00 0.00 00:08:32.722 00:08:33.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.676 Nvme0n1 : 8.00 7700.75 30.08 0.00 0.00 0.00 0.00 0.00 00:08:33.676 [2024-12-16T11:30:34.429Z] =================================================================================================================== 00:08:33.676 [2024-12-16T11:30:34.429Z] Total : 7700.75 30.08 0.00 0.00 0.00 0.00 0.00 00:08:33.676 00:08:34.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.609 Nvme0n1 : 9.00 7679.22 30.00 0.00 0.00 0.00 0.00 0.00 00:08:34.609 [2024-12-16T11:30:35.362Z] =================================================================================================================== 00:08:34.609 [2024-12-16T11:30:35.362Z] Total : 7679.22 30.00 0.00 0.00 0.00 0.00 0.00 00:08:34.609 00:08:35.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.545 Nvme0n1 : 10.00 7652.00 29.89 0.00 0.00 0.00 0.00 0.00 00:08:35.545 [2024-12-16T11:30:36.298Z] =================================================================================================================== 00:08:35.545 [2024-12-16T11:30:36.298Z] Total : 7652.00 29.89 0.00 0.00 0.00 0.00 0.00 00:08:35.545 00:08:35.545 00:08:35.545 Latency(us) 00:08:35.545 [2024-12-16T11:30:36.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.545 Nvme0n1 : 10.01 7656.42 29.91 0.00 0.00 16712.73 7983.48 47662.55 00:08:35.545 [2024-12-16T11:30:36.298Z] =================================================================================================================== 00:08:35.545 [2024-12-16T11:30:36.298Z] Total : 7656.42 29.91 0.00 0.00 16712.73 7983.48 47662.55 00:08:35.545 { 00:08:35.545 "results": [ 00:08:35.545 { 00:08:35.545 "job": "Nvme0n1", 00:08:35.545 "core_mask": "0x2", 00:08:35.545 "workload": "randwrite", 00:08:35.545 "status": "finished", 00:08:35.545 "queue_depth": 128, 00:08:35.545 "io_size": 4096, 00:08:35.545 "runtime": 10.010946, 00:08:35.545 "iops": 7656.419283452333, 00:08:35.545 "mibps": 29.907887825985675, 00:08:35.545 "io_failed": 0, 00:08:35.545 "io_timeout": 0, 00:08:35.545 "avg_latency_us": 
16712.73245874885, 00:08:35.545 "min_latency_us": 7983.476363636363, 00:08:35.545 "max_latency_us": 47662.545454545456 00:08:35.545 } 00:08:35.545 ], 00:08:35.545 "core_count": 1 00:08:35.545 } 00:08:35.545 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68667 00:08:35.545 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 68667 ']' 00:08:35.545 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 68667 00:08:35.545 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:35.545 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.545 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68667 00:08:35.545 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:35.545 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:35.545 killing process with pid 68667 00:08:35.545 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68667' 00:08:35.545 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 68667 00:08:35.545 Received shutdown signal, test time was about 10.000000 seconds 00:08:35.545 00:08:35.545 Latency(us) 00:08:35.545 [2024-12-16T11:30:36.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.545 [2024-12-16T11:30:36.298Z] =================================================================================================================== 00:08:35.545 [2024-12-16T11:30:36.298Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:35.545 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 68667 00:08:35.803 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:36.062 11:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:36.319 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:36.320 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:36.578 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:36.578 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:36.578 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 68057 00:08:36.578 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 68057 00:08:36.836 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 68057 Killed "${NVMF_APP[@]}" "$@" 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=68883 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 68883 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 68883 ']' 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.836 11:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:36.836 [2024-12-16 11:30:37.422051] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:08:36.836 [2024-12-16 11:30:37.422216] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.836 [2024-12-16 11:30:37.572565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.094 [2024-12-16 11:30:37.632854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.094 [2024-12-16 11:30:37.632920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.094 [2024-12-16 11:30:37.632931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.094 [2024-12-16 11:30:37.632939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.094 [2024-12-16 11:30:37.632947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
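At this point the dirty variant has killed the target with SIGKILL while the lvstore was still open, and a fresh nvmf_tgt is coming up. Re-registering the same backing file is what triggers the blobstore recovery notices that follow; the subsequent cluster checks confirm the crashed lvstore came back intact. A sketch of that recovery-side sequence, assuming the UUIDs from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    # Re-create the AIO bdev over the same file; examine replays the dirty
    # blobstore metadata ("Performing recovery on blobstore" in the trace).
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    $rpc bdev_wait_for_examine

    # The lvol must reappear on its own once recovery finishes.
    $rpc bdev_get_bdevs -b ad258545-4ca9-419d-bc25-bb31835a9994 -t 2000

    # Cluster accounting must survive the crash: 61 free, 99 total.
    $rpc bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 \
        | jq -r '.[0].free_clusters'          # expect 61
    $rpc bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 \
        | jq -r '.[0].total_data_clusters'    # expect 99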
00:08:37.094 [2024-12-16 11:30:37.633379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.673 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.673 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:37.673 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:37.673 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.673 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:37.931 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.931 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:38.189 [2024-12-16 11:30:38.712511] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:38.189 [2024-12-16 11:30:38.713238] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:38.189 [2024-12-16 11:30:38.714118] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:38.189 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:38.189 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ad258545-4ca9-419d-bc25-bb31835a9994 00:08:38.189 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ad258545-4ca9-419d-bc25-bb31835a9994 00:08:38.189 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.189 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:38.189 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.189 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.189 11:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:38.447 11:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad258545-4ca9-419d-bc25-bb31835a9994 -t 2000 00:08:38.705 [ 00:08:38.705 { 00:08:38.705 "aliases": [ 00:08:38.705 "lvs/lvol" 00:08:38.705 ], 00:08:38.705 "assigned_rate_limits": { 00:08:38.705 "r_mbytes_per_sec": 0, 00:08:38.705 "rw_ios_per_sec": 0, 00:08:38.705 "rw_mbytes_per_sec": 0, 00:08:38.705 "w_mbytes_per_sec": 0 00:08:38.705 }, 00:08:38.706 "block_size": 4096, 00:08:38.706 "claimed": false, 00:08:38.706 "driver_specific": { 00:08:38.706 "lvol": { 00:08:38.706 "base_bdev": "aio_bdev", 00:08:38.706 "clone": false, 00:08:38.706 "esnap_clone": false, 00:08:38.706 "lvol_store_uuid": "cf01c725-5dee-4083-a460-7336382c7b92", 00:08:38.706 "num_allocated_clusters": 38, 00:08:38.706 "snapshot": false, 00:08:38.706 
"thin_provision": false 00:08:38.706 } 00:08:38.706 }, 00:08:38.706 "name": "ad258545-4ca9-419d-bc25-bb31835a9994", 00:08:38.706 "num_blocks": 38912, 00:08:38.706 "product_name": "Logical Volume", 00:08:38.706 "supported_io_types": { 00:08:38.706 "abort": false, 00:08:38.706 "compare": false, 00:08:38.706 "compare_and_write": false, 00:08:38.706 "copy": false, 00:08:38.706 "flush": false, 00:08:38.706 "get_zone_info": false, 00:08:38.706 "nvme_admin": false, 00:08:38.706 "nvme_io": false, 00:08:38.706 "nvme_io_md": false, 00:08:38.706 "nvme_iov_md": false, 00:08:38.706 "read": true, 00:08:38.706 "reset": true, 00:08:38.706 "seek_data": true, 00:08:38.706 "seek_hole": true, 00:08:38.706 "unmap": true, 00:08:38.706 "write": true, 00:08:38.706 "write_zeroes": true, 00:08:38.706 "zcopy": false, 00:08:38.706 "zone_append": false, 00:08:38.706 "zone_management": false 00:08:38.706 }, 00:08:38.706 "uuid": "ad258545-4ca9-419d-bc25-bb31835a9994", 00:08:38.706 "zoned": false 00:08:38.706 } 00:08:38.706 ] 00:08:38.706 11:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:38.706 11:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:38.706 11:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:38.964 11:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:38.964 11:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:38.964 11:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:39.223 11:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:39.223 11:30:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.791 [2024-12-16 11:30:40.277838] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:39.791 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:39.791 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:39.791 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:39.791 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.791 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.791 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.791 11:30:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.791 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.791 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.791 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.791 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:39.791 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:40.050 2024/12/16 11:30:40 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:cf01c725-5dee-4083-a460-7336382c7b92], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:08:40.050 request: 00:08:40.050 { 00:08:40.050 "method": "bdev_lvol_get_lvstores", 00:08:40.050 "params": { 00:08:40.050 "uuid": "cf01c725-5dee-4083-a460-7336382c7b92" 00:08:40.050 } 00:08:40.050 } 00:08:40.050 Got JSON-RPC error response 00:08:40.050 GoRPCClient: error on JSON-RPC call 00:08:40.050 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:40.050 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.050 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:40.050 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.050 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.308 aio_bdev 00:08:40.308 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ad258545-4ca9-419d-bc25-bb31835a9994 00:08:40.308 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ad258545-4ca9-419d-bc25-bb31835a9994 00:08:40.308 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.308 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:40.308 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.308 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.308 11:30:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:40.567 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad258545-4ca9-419d-bc25-bb31835a9994 -t 2000 00:08:40.825 [ 
00:08:40.825 { 00:08:40.825 "aliases": [ 00:08:40.825 "lvs/lvol" 00:08:40.825 ], 00:08:40.825 "assigned_rate_limits": { 00:08:40.825 "r_mbytes_per_sec": 0, 00:08:40.825 "rw_ios_per_sec": 0, 00:08:40.825 "rw_mbytes_per_sec": 0, 00:08:40.825 "w_mbytes_per_sec": 0 00:08:40.825 }, 00:08:40.825 "block_size": 4096, 00:08:40.826 "claimed": false, 00:08:40.826 "driver_specific": { 00:08:40.826 "lvol": { 00:08:40.826 "base_bdev": "aio_bdev", 00:08:40.826 "clone": false, 00:08:40.826 "esnap_clone": false, 00:08:40.826 "lvol_store_uuid": "cf01c725-5dee-4083-a460-7336382c7b92", 00:08:40.826 "num_allocated_clusters": 38, 00:08:40.826 "snapshot": false, 00:08:40.826 "thin_provision": false 00:08:40.826 } 00:08:40.826 }, 00:08:40.826 "name": "ad258545-4ca9-419d-bc25-bb31835a9994", 00:08:40.826 "num_blocks": 38912, 00:08:40.826 "product_name": "Logical Volume", 00:08:40.826 "supported_io_types": { 00:08:40.826 "abort": false, 00:08:40.826 "compare": false, 00:08:40.826 "compare_and_write": false, 00:08:40.826 "copy": false, 00:08:40.826 "flush": false, 00:08:40.826 "get_zone_info": false, 00:08:40.826 "nvme_admin": false, 00:08:40.826 "nvme_io": false, 00:08:40.826 "nvme_io_md": false, 00:08:40.826 "nvme_iov_md": false, 00:08:40.826 "read": true, 00:08:40.826 "reset": true, 00:08:40.826 "seek_data": true, 00:08:40.826 "seek_hole": true, 00:08:40.826 "unmap": true, 00:08:40.826 "write": true, 00:08:40.826 "write_zeroes": true, 00:08:40.826 "zcopy": false, 00:08:40.826 "zone_append": false, 00:08:40.826 "zone_management": false 00:08:40.826 }, 00:08:40.826 "uuid": "ad258545-4ca9-419d-bc25-bb31835a9994", 00:08:40.826 "zoned": false 00:08:40.826 } 00:08:40.826 ] 00:08:40.826 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:40.826 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:40.826 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:41.084 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:41.084 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:41.084 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:41.342 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:41.342 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ad258545-4ca9-419d-bc25-bb31835a9994 00:08:41.600 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf01c725-5dee-4083-a460-7336382c7b92 00:08:41.902 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:42.160 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:42.728 00:08:42.728 real 0m21.911s 00:08:42.728 user 0m44.917s 00:08:42.728 sys 0m8.273s 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:42.728 ************************************ 00:08:42.728 END TEST lvs_grow_dirty 00:08:42.728 ************************************ 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:42.728 nvmf_trace.0 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.728 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.295 rmmod nvme_tcp 00:08:43.295 rmmod nvme_fabrics 00:08:43.295 rmmod nvme_keyring 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 68883 ']' 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 68883 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 68883 ']' 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 68883 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:43.295 11:30:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68883 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.295 killing process with pid 68883 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68883' 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 68883 00:08:43.295 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 68883 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.554 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:43.813 00:08:43.813 real 0m43.847s 00:08:43.813 user 1m10.321s 00:08:43.813 sys 0m11.574s 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.813 ************************************ 00:08:43.813 END TEST nvmf_lvs_grow 00:08:43.813 ************************************ 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.813 ************************************ 00:08:43.813 START TEST nvmf_bdev_io_wait 00:08:43.813 ************************************ 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:43.813 * Looking for test storage... 
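The lvs_grow_dirty test that just completed exercises a recreate-and-verify pattern: the AIO file that backed a dirty lvol store is re-registered, bdev examination is allowed to finish, and the recovered store is checked for the expected cluster counts before everything is torn down in reverse order. A condensed sketch of that flow, using the same rpc.py calls that appear in the trace above (lvs_uuid and lvol_uuid stand in for the UUIDs the test recorded earlier; they are illustrative placeholders, not values from this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

# Re-register the AIO file that held the lvol store metadata.
$rpc bdev_aio_create "$aio_file" aio_bdev 4096

# Let examination finish so the store is rediscovered from on-disk metadata.
$rpc bdev_wait_for_examine

# Verify the store and its lvol came back with the expected geometry.
$rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters'
$rpc bdev_get_bdevs -b "$lvol_uuid" -t 2000

# Tear down in reverse order once the counters check out.
$rpc bdev_lvol_delete "$lvol_uuid"
$rpc bdev_lvol_delete_lvstore -u "$lvs_uuid"
$rpc bdev_aio_delete aio_bdev
rm -f "$aio_file"

The assertions that matter are the (( free_clusters == 61 )) and (( data_clusters == 99 )) checks in the trace: they prove the grown store survived the unclean shutdown with its post-grow geometry intact.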
00:08:43.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:43.813 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:43.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.814 --rc genhtml_branch_coverage=1 00:08:43.814 --rc genhtml_function_coverage=1 00:08:43.814 --rc genhtml_legend=1 00:08:43.814 --rc geninfo_all_blocks=1 00:08:43.814 --rc geninfo_unexecuted_blocks=1 00:08:43.814 00:08:43.814 ' 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:43.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.814 --rc genhtml_branch_coverage=1 00:08:43.814 --rc genhtml_function_coverage=1 00:08:43.814 --rc genhtml_legend=1 00:08:43.814 --rc geninfo_all_blocks=1 00:08:43.814 --rc geninfo_unexecuted_blocks=1 00:08:43.814 00:08:43.814 ' 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:43.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.814 --rc genhtml_branch_coverage=1 00:08:43.814 --rc genhtml_function_coverage=1 00:08:43.814 --rc genhtml_legend=1 00:08:43.814 --rc geninfo_all_blocks=1 00:08:43.814 --rc geninfo_unexecuted_blocks=1 00:08:43.814 00:08:43.814 ' 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:43.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.814 --rc genhtml_branch_coverage=1 00:08:43.814 --rc genhtml_function_coverage=1 00:08:43.814 --rc genhtml_legend=1 00:08:43.814 --rc geninfo_all_blocks=1 00:08:43.814 --rc geninfo_unexecuted_blocks=1 00:08:43.814 00:08:43.814 ' 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.814 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.073 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
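nvmftestinit, invoked next, routes the NET_TYPE=virt / tcp combination to nvmf_veth_init, whose commands are traced below: a network namespace for the target, veth pairs whose root-namespace ends are enslaved to a bridge, addresses on both sides, and a ping sweep to prove connectivity. Reduced to a single initiator/target pair, the topology build looks roughly like this (the traced run wires up two pairs and a second address on each side):

# Namespace for the target so its TCP stack is isolated from the host's.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: one end stays in the root namespace (to be bridged),
# the target end moves into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator on 10.0.0.1, target on 10.0.0.3, same /24.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bring the links up and join the root-namespace ends with a bridge.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Sanity check: the initiator side can reach the target address.
ping -c 1 10.0.0.3

The "Cannot find device" lines that follow are expected noise: the fini helpers run first to clear leftovers from a previous run, and each failure is apparently swallowed by an `|| true`-style guard (the `-- # true` entries at the same common.sh line numbers in the trace).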
00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.073 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:44.074 
11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:44.074 Cannot find device "nvmf_init_br" 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:44.074 Cannot find device "nvmf_init_br2" 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:44.074 Cannot find device "nvmf_tgt_br" 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.074 Cannot find device "nvmf_tgt_br2" 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:44.074 Cannot find device "nvmf_init_br" 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:44.074 Cannot find device "nvmf_init_br2" 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:44.074 Cannot find device "nvmf_tgt_br" 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:44.074 Cannot find device "nvmf_tgt_br2" 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:44.074 Cannot find device "nvmf_br" 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:44.074 Cannot find device "nvmf_init_if" 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:44.074 Cannot find device "nvmf_init_if2" 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:44.074 
11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:44.074 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:44.334 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:44.334 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:08:44.334 00:08:44.334 --- 10.0.0.3 ping statistics --- 00:08:44.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.334 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:44.334 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:44.334 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:08:44.334 00:08:44.334 --- 10.0.0.4 ping statistics --- 00:08:44.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.334 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:44.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:44.334 00:08:44.334 --- 10.0.0.1 ping statistics --- 00:08:44.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.334 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:44.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:44.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:08:44.334 00:08:44.334 --- 10.0.0.2 ping statistics --- 00:08:44.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.334 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=69368 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 69368 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 69368 ']' 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.334 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:44.334 [2024-12-16 11:30:45.033118] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
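At this point nvmfappstart has launched nvmf_tgt (pid 69368) inside the namespace with `ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc`, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. The harness's own loop carries more retry bookkeeping; a minimal sketch of the same idea, not the harness's exact implementation:

# Minimal wait-for-RPC loop in the spirit of waitforlisten (a sketch).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
pid=69368

for ((i = 0; i < 100; i++)); do
    # Bail out early if the target died during startup.
    kill -0 "$pid" 2>/dev/null || exit 1
    # rpc_get_methods succeeds once the app is listening on the socket.
    if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done

Because the target was started with --wait-for-rpc, it is the subsequent rpc_cmd framework_start_init call that actually releases it into its runtime state.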
00:08:44.334 [2024-12-16 11:30:45.033271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.591 [2024-12-16 11:30:45.244605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.591 [2024-12-16 11:30:45.320519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.591 [2024-12-16 11:30:45.320605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.591 [2024-12-16 11:30:45.320633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.591 [2024-12-16 11:30:45.320642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.591 [2024-12-16 11:30:45.320649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.591 [2024-12-16 11:30:45.321908] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.591 [2024-12-16 11:30:45.321967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.591 [2024-12-16 11:30:45.322090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.591 [2024-12-16 11:30:45.322094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.526 [2024-12-16 11:30:46.187204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.526 Malloc0 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.526 [2024-12-16 11:30:46.248322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=69421 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=69423 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.526 { 00:08:45.526 "params": { 
00:08:45.526 "name": "Nvme$subsystem", 00:08:45.526 "trtype": "$TEST_TRANSPORT", 00:08:45.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.526 "adrfam": "ipv4", 00:08:45.526 "trsvcid": "$NVMF_PORT", 00:08:45.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.526 "hdgst": ${hdgst:-false}, 00:08:45.526 "ddgst": ${ddgst:-false} 00:08:45.526 }, 00:08:45.526 "method": "bdev_nvme_attach_controller" 00:08:45.526 } 00:08:45.526 EOF 00:08:45.526 )") 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.526 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.526 { 00:08:45.526 "params": { 00:08:45.526 "name": "Nvme$subsystem", 00:08:45.526 "trtype": "$TEST_TRANSPORT", 00:08:45.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.526 "adrfam": "ipv4", 00:08:45.526 "trsvcid": "$NVMF_PORT", 00:08:45.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.526 "hdgst": ${hdgst:-false}, 00:08:45.526 "ddgst": ${ddgst:-false} 00:08:45.526 }, 00:08:45.527 "method": "bdev_nvme_attach_controller" 00:08:45.527 } 00:08:45.527 EOF 00:08:45.527 )") 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.527 { 00:08:45.527 "params": { 00:08:45.527 "name": "Nvme$subsystem", 00:08:45.527 "trtype": "$TEST_TRANSPORT", 00:08:45.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.527 "adrfam": "ipv4", 00:08:45.527 "trsvcid": "$NVMF_PORT", 00:08:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.527 "hdgst": ${hdgst:-false}, 00:08:45.527 "ddgst": ${ddgst:-false} 00:08:45.527 }, 00:08:45.527 "method": "bdev_nvme_attach_controller" 00:08:45.527 } 00:08:45.527 EOF 00:08:45.527 )") 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.527 11:30:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=69427 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=69433 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.527 "params": { 00:08:45.527 "name": "Nvme1", 00:08:45.527 "trtype": "tcp", 00:08:45.527 "traddr": "10.0.0.3", 00:08:45.527 "adrfam": "ipv4", 00:08:45.527 "trsvcid": "4420", 00:08:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.527 "hdgst": false, 00:08:45.527 "ddgst": false 00:08:45.527 }, 00:08:45.527 "method": "bdev_nvme_attach_controller" 00:08:45.527 }' 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.527 "params": { 00:08:45.527 "name": "Nvme1", 00:08:45.527 "trtype": "tcp", 00:08:45.527 "traddr": "10.0.0.3", 00:08:45.527 "adrfam": "ipv4", 00:08:45.527 "trsvcid": "4420", 00:08:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.527 "hdgst": false, 00:08:45.527 "ddgst": false 00:08:45.527 }, 00:08:45.527 "method": "bdev_nvme_attach_controller" 00:08:45.527 }' 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.527 { 00:08:45.527 "params": { 00:08:45.527 "name": "Nvme$subsystem", 00:08:45.527 "trtype": "$TEST_TRANSPORT", 00:08:45.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.527 "adrfam": "ipv4", 00:08:45.527 "trsvcid": "$NVMF_PORT", 00:08:45.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.527 "hdgst": ${hdgst:-false}, 00:08:45.527 "ddgst": ${ddgst:-false} 00:08:45.527 }, 00:08:45.527 "method": "bdev_nvme_attach_controller" 00:08:45.527 } 00:08:45.527 EOF 00:08:45.527 )") 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.527 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
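The gen_nvmf_target_json calls traced above all follow one pattern: append a heredoc JSON fragment per subsystem to a bash array, join the fragments, and normalize the result with jq so each bdevperf can read it from /dev/fd/63 (process substitution on --json). Stripped to the single-subsystem case used here, the mechanism is roughly:

# One attach-controller fragment per subsystem, collected in an array.
config=()
config+=('{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}')

# Join the fragments and let jq validate/pretty-print the document.
IFS=,
printf '%s\n' "${config[*]}" | jq .

With a single fragment the joined string is itself valid JSON, which is what the printed config above shows; the harness handles the multi-subsystem case. Each bdevperf then consumes it without a temp file, e.g. `bdevperf --json <(gen_nvmf_target_json) ...`, which is why the trace reports --json /dev/fd/63.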
00:08:45.786 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.786 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.786 "params": { 00:08:45.786 "name": "Nvme1", 00:08:45.786 "trtype": "tcp", 00:08:45.786 "traddr": "10.0.0.3", 00:08:45.786 "adrfam": "ipv4", 00:08:45.786 "trsvcid": "4420", 00:08:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.786 "hdgst": false, 00:08:45.786 "ddgst": false 00:08:45.786 }, 00:08:45.786 "method": "bdev_nvme_attach_controller" 00:08:45.786 }' 00:08:45.786 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:45.786 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.786 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.786 "params": { 00:08:45.786 "name": "Nvme1", 00:08:45.786 "trtype": "tcp", 00:08:45.786 "traddr": "10.0.0.3", 00:08:45.786 "adrfam": "ipv4", 00:08:45.786 "trsvcid": "4420", 00:08:45.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.786 "hdgst": false, 00:08:45.786 "ddgst": false 00:08:45.786 }, 00:08:45.786 "method": "bdev_nvme_attach_controller" 00:08:45.786 }' 00:08:45.786 [2024-12-16 11:30:46.316396] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:08:45.786 [2024-12-16 11:30:46.316494] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:45.786 [2024-12-16 11:30:46.325197] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:08:45.786 [2024-12-16 11:30:46.325286] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:45.786 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 69421 00:08:45.786 [2024-12-16 11:30:46.349456] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:08:45.786 [2024-12-16 11:30:46.349554] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:45.786 [2024-12-16 11:30:46.354400] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
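Four bdevperf instances (write, read, flush and unmap workloads on core masks 0x10, 0x20, 0x40 and 0x80) now start up against the same cnode1 subsystem; the harness records each PID and waits on all of them before tearing the target down. The launch-and-collect pattern reduces to something like this sketch (gen_nvmf_target_json assumed in scope from the harness; flags exactly as in this run):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# One instance per workload, each pinned to its own core with a 256 MB
# memory reservation (-s 256, visible as -m 256 in the EAL parameters).
"$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
"$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

# Collect all four results before nvmf_delete_subsystem runs.
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID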
00:08:45.786 [2024-12-16 11:30:46.354495] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:46.044 [2024-12-16 11:30:46.541319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.044 [2024-12-16 11:30:46.597134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:46.044 [2024-12-16 11:30:46.610710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.044 [2024-12-16 11:30:46.663481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:46.044 [2024-12-16 11:30:46.691031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.044 [2024-12-16 11:30:46.754705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:46.044 [2024-12-16 11:30:46.776287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.044 Running I/O for 1 seconds... 00:08:46.302 Running I/O for 1 seconds... 00:08:46.302 [2024-12-16 11:30:46.838516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:46.302 Running I/O for 1 seconds... 00:08:46.302 Running I/O for 1 seconds... 00:08:47.237 182256.00 IOPS, 711.94 MiB/s 00:08:47.237 Latency(us) 00:08:47.237 [2024-12-16T11:30:47.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.237 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:47.237 Nvme1n1 : 1.00 181915.45 710.61 0.00 0.00 699.44 288.58 1869.27 00:08:47.237 [2024-12-16T11:30:47.990Z] =================================================================================================================== 00:08:47.237 [2024-12-16T11:30:47.990Z] Total : 181915.45 710.61 0.00 0.00 699.44 288.58 1869.27 00:08:47.237 9498.00 IOPS, 37.10 MiB/s 00:08:47.237 Latency(us) 00:08:47.237 [2024-12-16T11:30:47.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.237 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:47.237 Nvme1n1 : 1.01 9554.90 37.32 0.00 0.00 13337.12 7596.22 43134.60 00:08:47.237 [2024-12-16T11:30:47.990Z] =================================================================================================================== 00:08:47.237 [2024-12-16T11:30:47.990Z] Total : 9554.90 37.32 0.00 0.00 13337.12 7596.22 43134.60 00:08:47.237 7993.00 IOPS, 31.22 MiB/s 00:08:47.237 Latency(us) 00:08:47.237 [2024-12-16T11:30:47.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.237 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:47.237 Nvme1n1 : 1.01 8058.70 31.48 0.00 0.00 15810.58 6106.76 23473.80 00:08:47.237 [2024-12-16T11:30:47.990Z] =================================================================================================================== 00:08:47.237 [2024-12-16T11:30:47.990Z] Total : 8058.70 31.48 0.00 0.00 15810.58 6106.76 23473.80 00:08:47.496 7068.00 IOPS, 27.61 MiB/s 00:08:47.496 Latency(us) 00:08:47.496 [2024-12-16T11:30:48.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.496 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:47.496 Nvme1n1 : 1.01 7130.85 27.85 0.00 0.00 17855.39 7923.90 26214.40 00:08:47.496 [2024-12-16T11:30:48.249Z] 
=================================================================================================================== 00:08:47.496 [2024-12-16T11:30:48.249Z] Total : 7130.85 27.85 0.00 0.00 17855.39 7923.90 26214.40 00:08:47.496 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 69423 00:08:47.496 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 69427 00:08:47.496 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 69433 00:08:47.496 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.496 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.496 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.496 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.496 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:47.496 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:47.496 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.497 rmmod nvme_tcp 00:08:47.497 rmmod nvme_fabrics 00:08:47.497 rmmod nvme_keyring 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 69368 ']' 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 69368 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 69368 ']' 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 69368 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.497 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69368 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.756 killing process with pid 69368 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 69368' 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 69368 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 69368 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:47.756 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.015 11:30:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:48.015 00:08:48.015 real 0m4.353s 00:08:48.015 user 0m17.343s 00:08:48.015 sys 0m2.204s 00:08:48.015 ************************************ 00:08:48.015 END TEST nvmf_bdev_io_wait 00:08:48.015 ************************************ 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.015 ************************************ 00:08:48.015 START TEST nvmf_queue_depth 00:08:48.015 ************************************ 00:08:48.015 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:48.275 * Looking for test storage... 00:08:48.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:48.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.275 --rc genhtml_branch_coverage=1 00:08:48.275 --rc genhtml_function_coverage=1 00:08:48.275 --rc genhtml_legend=1 00:08:48.275 --rc geninfo_all_blocks=1 00:08:48.275 --rc geninfo_unexecuted_blocks=1 00:08:48.275 00:08:48.275 ' 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:48.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.275 --rc genhtml_branch_coverage=1 00:08:48.275 --rc genhtml_function_coverage=1 00:08:48.275 --rc genhtml_legend=1 00:08:48.275 --rc geninfo_all_blocks=1 00:08:48.275 --rc geninfo_unexecuted_blocks=1 00:08:48.275 00:08:48.275 ' 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:48.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.275 --rc genhtml_branch_coverage=1 00:08:48.275 --rc genhtml_function_coverage=1 00:08:48.275 --rc genhtml_legend=1 00:08:48.275 --rc geninfo_all_blocks=1 00:08:48.275 --rc geninfo_unexecuted_blocks=1 00:08:48.275 00:08:48.275 ' 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:48.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.275 --rc genhtml_branch_coverage=1 00:08:48.275 --rc genhtml_function_coverage=1 00:08:48.275 --rc genhtml_legend=1 00:08:48.275 --rc geninfo_all_blocks=1 00:08:48.275 --rc geninfo_unexecuted_blocks=1 00:08:48.275 00:08:48.275 ' 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:48.275 11:30:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.275 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.276 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:48.276 
11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:48.276 11:30:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:48.276 Cannot find device "nvmf_init_br" 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:48.276 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:48.276 Cannot find device "nvmf_init_br2" 00:08:48.276 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:48.276 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:48.276 Cannot find device "nvmf_tgt_br" 00:08:48.276 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:48.276 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.535 Cannot find device "nvmf_tgt_br2" 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:48.535 Cannot find device "nvmf_init_br" 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:48.535 Cannot find device "nvmf_init_br2" 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:48.535 Cannot find device "nvmf_tgt_br" 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:48.535 Cannot find device "nvmf_tgt_br2" 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:48.535 Cannot find device "nvmf_br" 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:48.535 Cannot find device "nvmf_init_if" 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:48.535 Cannot find device "nvmf_init_if2" 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.535 11:30:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:48.535 
11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:48.535 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:48.794 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:48.794 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:08:48.794 00:08:48.794 --- 10.0.0.3 ping statistics --- 00:08:48.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.794 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:48.794 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:48.794 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:08:48.794 00:08:48.794 --- 10.0.0.4 ping statistics --- 00:08:48.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.794 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:48.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:08:48.794 00:08:48.794 --- 10.0.0.1 ping statistics --- 00:08:48.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.794 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:48.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:08:48.794 00:08:48.794 --- 10.0.0.2 ping statistics --- 00:08:48.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.794 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=69711 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 69711 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 69711 ']' 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.794 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.794 [2024-12-16 11:30:49.437790] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
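The ping checks above complete the veth/bridge fabric the test runs on. Condensed from the ip(8) commands logged above down to one initiator/target pair, the topology is roughly:

    # sketch of the namespace topology (interface names as in the log)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target reachability check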
00:08:48.794 [2024-12-16 11:30:49.437899] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.054 [2024-12-16 11:30:49.597125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.054 [2024-12-16 11:30:49.666532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.054 [2024-12-16 11:30:49.666604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.054 [2024-12-16 11:30:49.666619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.054 [2024-12-16 11:30:49.666629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.054 [2024-12-16 11:30:49.666639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.054 [2024-12-16 11:30:49.667094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.054 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.054 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:49.054 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.054 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.054 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.313 [2024-12-16 11:30:49.855432] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.313 Malloc0 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.313 [2024-12-16 11:30:49.908475] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=69747 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 69747 /var/tmp/bdevperf.sock 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 69747 ']' 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.313 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.313 [2024-12-16 11:30:49.969942] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
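The subsystem setup above reduces to five RPC calls (rpc_cmd is effectively scripts/rpc.py against the target's RPC socket; the netns plumbing is omitted in this sketch):

    # sketch: stand up the NVMe/TCP target exercised by this test
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420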
00:08:49.313 [2024-12-16 11:30:49.970041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69747 ] 00:08:49.601 [2024-12-16 11:30:50.129186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.601 [2024-12-16 11:30:50.193365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.601 11:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.601 11:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:49.601 11:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:49.601 11:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.601 11:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.859 NVMe0n1 00:08:49.859 11:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.859 11:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:49.859 Running I/O for 10 seconds... 00:08:52.172 8050.00 IOPS, 31.45 MiB/s [2024-12-16T11:30:53.861Z] 8195.50 IOPS, 32.01 MiB/s [2024-12-16T11:30:54.802Z] 8296.33 IOPS, 32.41 MiB/s [2024-12-16T11:30:55.816Z] 8436.50 IOPS, 32.96 MiB/s [2024-12-16T11:30:56.749Z] 8459.40 IOPS, 33.04 MiB/s [2024-12-16T11:30:57.680Z] 8533.00 IOPS, 33.33 MiB/s [2024-12-16T11:30:58.611Z] 8625.71 IOPS, 33.69 MiB/s [2024-12-16T11:30:59.543Z] 8699.25 IOPS, 33.98 MiB/s [2024-12-16T11:31:00.915Z] 8754.44 IOPS, 34.20 MiB/s [2024-12-16T11:31:00.915Z] 8799.30 IOPS, 34.37 MiB/s 00:09:00.162 Latency(us) 00:09:00.162 [2024-12-16T11:31:00.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.162 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:00.162 Verification LBA range: start 0x0 length 0x4000 00:09:00.162 NVMe0n1 : 10.08 8825.77 34.48 0.00 0.00 115519.49 28240.06 107717.35 00:09:00.162 [2024-12-16T11:31:00.915Z] =================================================================================================================== 00:09:00.162 [2024-12-16T11:31:00.915Z] Total : 8825.77 34.48 0.00 0.00 115519.49 28240.06 107717.35 00:09:00.162 { 00:09:00.162 "results": [ 00:09:00.162 { 00:09:00.162 "job": "NVMe0n1", 00:09:00.162 "core_mask": "0x1", 00:09:00.162 "workload": "verify", 00:09:00.162 "status": "finished", 00:09:00.162 "verify_range": { 00:09:00.162 "start": 0, 00:09:00.162 "length": 16384 00:09:00.162 }, 00:09:00.162 "queue_depth": 1024, 00:09:00.162 "io_size": 4096, 00:09:00.162 "runtime": 10.082185, 00:09:00.162 "iops": 8825.765446676489, 00:09:00.162 "mibps": 34.475646276080035, 00:09:00.162 "io_failed": 0, 00:09:00.162 "io_timeout": 0, 00:09:00.162 "avg_latency_us": 115519.487334639, 00:09:00.162 "min_latency_us": 28240.05818181818, 00:09:00.162 "max_latency_us": 107717.35272727272 00:09:00.162 } 00:09:00.162 ], 00:09:00.162 "core_count": 1 00:09:00.162 } 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- target/queue_depth.sh@39 -- # killprocess 69747 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 69747 ']' 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 69747 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69747 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69747' 00:09:00.162 killing process with pid 69747 00:09:00.162 Received shutdown signal, test time was about 10.000000 seconds 00:09:00.162 00:09:00.162 Latency(us) 00:09:00.162 [2024-12-16T11:31:00.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.162 [2024-12-16T11:31:00.915Z] =================================================================================================================== 00:09:00.162 [2024-12-16T11:31:00.915Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 69747 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 69747 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:00.162 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:00.420 rmmod nvme_tcp 00:09:00.420 rmmod nvme_fabrics 00:09:00.420 rmmod nvme_keyring 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 69711 ']' 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 69711 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 69711 ']' 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 69711 00:09:00.420 11:31:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.420 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69711 00:09:00.420 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:00.420 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:00.420 killing process with pid 69711 00:09:00.420 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69711' 00:09:00.420 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 69711 00:09:00.420 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 69711 00:09:00.677 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:00.677 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:00.677 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.678 11:31:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.678 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:00.934 00:09:00.934 real 0m12.691s 00:09:00.934 user 0m21.508s 00:09:00.934 sys 0m2.085s 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.934 ************************************ 00:09:00.934 END TEST nvmf_queue_depth 00:09:00.934 ************************************ 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.934 ************************************ 00:09:00.934 START TEST nvmf_target_multipath 00:09:00.934 ************************************ 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:00.934 * Looking for test storage... 
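The queue-depth test that just ended follows a reusable three-step bdevperf pattern; condensed from the invocations logged above (paths as in this workspace):

    # sketch: start bdevperf idle at queue depth 1024, attach over RPC, then run
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests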
00:09:00.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.934 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:00.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.935 --rc genhtml_branch_coverage=1 00:09:00.935 --rc genhtml_function_coverage=1 00:09:00.935 --rc genhtml_legend=1 00:09:00.935 --rc geninfo_all_blocks=1 00:09:00.935 --rc geninfo_unexecuted_blocks=1 00:09:00.935 00:09:00.935 ' 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:00.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.935 --rc genhtml_branch_coverage=1 00:09:00.935 --rc genhtml_function_coverage=1 00:09:00.935 --rc genhtml_legend=1 00:09:00.935 --rc geninfo_all_blocks=1 00:09:00.935 --rc geninfo_unexecuted_blocks=1 00:09:00.935 00:09:00.935 ' 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:00.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.935 --rc genhtml_branch_coverage=1 00:09:00.935 --rc genhtml_function_coverage=1 00:09:00.935 --rc genhtml_legend=1 00:09:00.935 --rc geninfo_all_blocks=1 00:09:00.935 --rc geninfo_unexecuted_blocks=1 00:09:00.935 00:09:00.935 ' 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:00.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.935 --rc genhtml_branch_coverage=1 00:09:00.935 --rc genhtml_function_coverage=1 00:09:00.935 --rc genhtml_legend=1 00:09:00.935 --rc geninfo_all_blocks=1 00:09:00.935 --rc geninfo_unexecuted_blocks=1 00:09:00.935 00:09:00.935 ' 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.935 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.193 
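multipath.sh begins by sourcing test/nvmf/common.sh, which pins the listener ports (4420/4421/4422), fixes the serial to SPDKISFASTANDAWESOME, and mints a fresh host identity for the run with `nvme gen-hostnqn`. That hostnqn/hostid pair (uuid 2fb5c03d-... above) is reused verbatim by every `nvme connect` later in the log. A minimal sketch of the derivation; extracting the hostid as the uuid suffix of the NQN is an assumption inferred from the matching values traced above:

```bash
# Sketch of the connect-credential setup mirrored from the nvmf/common.sh trace.
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)      # fresh uuid-based NQN each run
NVME_HOSTID=${NVME_HOSTNQN##*:}       # assumption: hostid = uuid suffix of the NQN
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
```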
11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.193 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.194 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:01.194 11:31:01 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:01.194 Cannot find device "nvmf_init_br" 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:01.194 Cannot find device "nvmf_init_br2" 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:01.194 Cannot find device "nvmf_tgt_br" 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:01.194 Cannot find device "nvmf_tgt_br2" 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:01.194 Cannot find device "nvmf_init_br" 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:01.194 Cannot find device "nvmf_init_br2" 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:01.194 Cannot find device "nvmf_tgt_br" 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:01.194 Cannot find device "nvmf_tgt_br2" 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:01.194 Cannot find device "nvmf_br" 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:01.194 Cannot find device "nvmf_init_if" 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:01.194 Cannot find device "nvmf_init_if2" 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:01.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:01.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
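Every "Cannot find device" and "Cannot open network namespace" line above is expected: nvmf_veth_init re-runs the teardown defensively before building anything, and each failing cleanup command is forced to succeed (the paired `true` entries). The build that follows, finishing with the bridge and firewall rules just below, creates four veth pairs, moves the target-side ends into the nvmf_tgt_ns_spdk namespace, and addresses initiators as 10.0.0.1/.2 and targets as 10.0.0.3/.4. A sketch reduced to one initiator/target pair (the second pair is identical with the *2 names):

```bash
# One initiator path and one target path from the topology traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end into netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up   # continues just below:
ip link set nvmf_init_br master nvmf_br                     # bridge the host-side ends
ip link set nvmf_tgt_br master nvmf_br
```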
00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:01.194 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:01.453 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:01.453 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:01.453 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:01.453 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:01.453 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:01.453 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:01.453 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:01.453 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:01.453 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:01.453 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:01.453 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:01.453 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:01.453 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:01.453 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:01.453 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:01.453 00:09:01.453 --- 10.0.0.3 ping statistics --- 00:09:01.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.453 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:01.453 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:01.453 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:01.453 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:09:01.453 00:09:01.453 --- 10.0.0.4 ping statistics --- 00:09:01.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.453 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:01.453 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:01.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:01.453 00:09:01.453 --- 10.0.0.1 ping statistics --- 00:09:01.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.453 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:01.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:09:01.454 00:09:01.454 --- 10.0.0.2 ping statistics --- 00:09:01.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.454 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=70123 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 70123 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 70123 ']' 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
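With all four one-packet pings succeeding in both directions, nvmftestinit declares the fabric usable, prepends the netns wrapper to NVMF_APP, loads nvme-tcp on the host, and nvmfappstart launches the target (pid 70123) inside the namespace. Note that the ACCEPT rules inserted just before the pings carry an SPDK_NVMF comment precisely so the teardown's grep can strip them later. A sketch of the reachability gate and the wrapped launch, using the binary path from the nvmfpid line above:

```bash
# Reachability gate traced above: one ping per address, both directions,
# before anything NVMe-related starts.
for ip in 10.0.0.3 10.0.0.4; do ping -c 1 "$ip"; done    # host -> namespace
for ip in 10.0.0.1 10.0.0.2; do
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 "$ip"         # namespace -> host
done
# The target itself runs wrapped in the namespace (pid 70123 in this run):
ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
```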
00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.454 11:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:01.454 [2024-12-16 11:31:02.137482] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:09:01.454 [2024-12-16 11:31:02.137594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.712 [2024-12-16 11:31:02.290762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.712 [2024-12-16 11:31:02.363832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.712 [2024-12-16 11:31:02.363888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.712 [2024-12-16 11:31:02.363902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.712 [2024-12-16 11:31:02.363913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.712 [2024-12-16 11:31:02.363922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.712 [2024-12-16 11:31:02.365178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.712 [2024-12-16 11:31:02.365317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.712 [2024-12-16 11:31:02.365387] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.712 [2024-12-16 11:31:02.365514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.646 11:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.646 11:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:02.646 11:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.646 11:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.646 11:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:02.646 11:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.646 11:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.904 [2024-12-16 11:31:03.542446] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.904 11:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:03.161 Malloc0 00:09:03.161 11:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
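Once the target answers on /var/tmp/spdk.sock, the test provisions it entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, and subsystem cnode1 with ANA reporting enabled, followed below by the namespace add, two listeners, and two `nvme connect` calls that give the host two paths to one namespace. The sequence, condensed from the rpc.py calls in the trace (flag readings follow rpc.py conventions and are not spelled out in the log):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192       # -u: IO unit size in bytes
$rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB ram bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDKISFASTANDAWESOME -r                 # -a: any host, -r: ANA reporting
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
```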
00:09:03.419 11:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.677 11:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:03.968 [2024-12-16 11:31:04.660412] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:03.968 11:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:04.259 [2024-12-16 11:31:04.932653] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:04.259 11:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:04.518 11:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:04.777 11:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:04.777 11:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:09:04.777 11:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.777 11:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:04.777 11:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=70266 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:06.677 11:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:06.935 [global] 00:09:06.935 thread=1 00:09:06.935 invalidate=1 00:09:06.935 rw=randrw 00:09:06.935 time_based=1 00:09:06.935 runtime=6 00:09:06.935 ioengine=libaio 00:09:06.935 direct=1 00:09:06.935 bs=4096 00:09:06.935 iodepth=128 00:09:06.935 norandommap=0 00:09:06.935 numjobs=1 00:09:06.935 00:09:06.935 verify_dump=1 00:09:06.935 verify_backlog=512 00:09:06.935 verify_state_save=0 00:09:06.935 do_verify=1 00:09:06.935 verify=crc32c-intel 00:09:06.935 [job0] 00:09:06.935 filename=/dev/nvme0n1 00:09:06.935 Could not set queue depth (nvme0n1) 00:09:06.935 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:06.935 fio-3.35 00:09:06.935 Starting 1 thread 00:09:07.869 11:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:08.127 11:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:08.385 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:09.319 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:09.319 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:09.319 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:09.319 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:09.884 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:10.141 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:11.075 11:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:11.075 11:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:11.075 11:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:11.075 11:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 70266 00:09:13.602 00:09:13.602 job0: (groupid=0, jobs=1): err= 0: pid=70293: Mon Dec 16 11:31:13 2024 00:09:13.602 read: IOPS=11.2k, BW=43.7MiB/s (45.8MB/s)(262MiB/6005msec) 00:09:13.602 slat (usec): min=4, max=6160, avg=51.45, stdev=232.91 00:09:13.602 clat (usec): min=833, max=13821, avg=7801.66, stdev=1175.35 00:09:13.602 lat (usec): min=879, max=13829, avg=7853.11, stdev=1184.64 00:09:13.602 clat percentiles (usec): 00:09:13.602 | 1.00th=[ 4752], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7111], 00:09:13.602 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 7898], 00:09:13.602 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[ 9896], 00:09:13.602 | 99.00th=[11469], 99.50th=[11863], 99.90th=[12518], 99.95th=[12780], 00:09:13.602 | 99.99th=[13304] 00:09:13.602 bw ( KiB/s): min= 9904, max=29440, per=52.24%, avg=23380.91, stdev=5522.93, samples=11 00:09:13.602 iops : min= 2476, max= 7360, avg=5845.18, stdev=1380.72, samples=11 00:09:13.602 write: IOPS=6506, BW=25.4MiB/s (26.7MB/s)(139MiB/5478msec); 0 zone resets 00:09:13.602 slat (usec): min=11, max=2839, avg=61.71, stdev=161.43 00:09:13.602 clat (usec): min=697, max=12973, avg=6665.00, stdev=961.07 00:09:13.602 lat (usec): min=775, max=12995, avg=6726.71, stdev=965.18 00:09:13.602 clat percentiles (usec): 00:09:13.602 | 1.00th=[ 3785], 5.00th=[ 4817], 10.00th=[ 5669], 20.00th=[ 6128], 00:09:13.602 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6915], 00:09:13.602 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7570], 95.00th=[ 7832], 00:09:13.602 | 99.00th=[ 9765], 99.50th=[10159], 99.90th=[11731], 99.95th=[11994], 00:09:13.602 | 99.99th=[12780] 00:09:13.602 bw ( KiB/s): min=10440, max=28672, per=89.81%, avg=23376.55, stdev=5144.01, samples=11 00:09:13.602 iops : min= 2610, max= 7168, avg=5844.09, stdev=1285.99, samples=11 00:09:13.602 lat (usec) : 750=0.01%, 1000=0.01% 00:09:13.602 lat (msec) : 2=0.02%, 4=0.62%, 10=95.96%, 20=3.39% 00:09:13.602 cpu : usr=6.13%, sys=21.72%, ctx=6541, majf=0, minf=78 00:09:13.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:13.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.602 issued rwts: total=67189,35645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.602 00:09:13.602 Run status group 0 (all jobs): 00:09:13.602 READ: bw=43.7MiB/s (45.8MB/s), 43.7MiB/s-43.7MiB/s (45.8MB/s-45.8MB/s), io=262MiB (275MB), run=6005-6005msec 00:09:13.602 WRITE: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=139MiB (146MB), run=5478-5478msec 00:09:13.602 00:09:13.602 Disk stats (read/write): 00:09:13.602 nvme0n1: ios=66243/34981, merge=0/0, ticks=485146/218765, in_queue=703911, util=98.62% 00:09:13.602 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:13.602 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
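The first fio pass (pid 70266, 4 KiB randrw, iodepth 128, 6 s time_based) finishes with err=0 at roughly 11.2k read / 6.5k write IOPS even though both ANA flips happen mid-run, under the policy selected by the earlier `echo numa` (presumably redirected into the subsystem's iopolicy node, which the trace does not show). The check_ana_state helper traced throughout is a one-second poll of the kernel's sysfs view of each path, with a 20-tick timeout; a reconstruction from the traced tests:

```bash
# Sketch of check_ana_state() as traced above: wait until the kernel agrees
# the path is in the expected ANA state, for up to ~20 s.
check_ana_state() {
    local path=$1 ana_state=$2 timeout=20
    local f=/sys/block/$path/ana_state
    while [[ ! -e $f || $(<"$f") != "$ana_state" ]]; do
        sleep 1s
        (( timeout-- == 0 )) && return 1
    done
}
check_ana_state nvme0c0n1 non-optimized
check_ana_state nvme0c1n1 inaccessible
```

The second pass (pid 70422) repeats the same flips after `echo round-robin`, exercising the other native-multipath IO policy.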
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:09:13.860 11:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:14.791 11:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:14.791 11:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:14.791 11:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:14.791 11:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:14.791 11:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=70422 00:09:14.791 11:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:14.791 11:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:14.791 [global] 00:09:14.791 thread=1 00:09:14.791 invalidate=1 00:09:14.791 rw=randrw 00:09:14.791 time_based=1 00:09:14.791 runtime=6 00:09:14.791 ioengine=libaio 00:09:14.791 direct=1 00:09:14.791 bs=4096 00:09:14.791 iodepth=128 00:09:14.791 norandommap=0 00:09:14.791 numjobs=1 00:09:14.791 00:09:14.791 verify_dump=1 00:09:14.791 verify_backlog=512 00:09:14.791 verify_state_save=0 00:09:14.791 do_verify=1 00:09:14.791 verify=crc32c-intel 00:09:14.791 [job0] 00:09:14.791 filename=/dev/nvme0n1 00:09:14.791 Could not set queue depth (nvme0n1) 00:09:15.048 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.048 fio-3.35 00:09:15.048 Starting 1 thread 00:09:15.983 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:15.983 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:16.241 11:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:17.614 11:31:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:17.614 11:31:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:17.614 11:31:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:17.614 11:31:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:17.614 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:17.871 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:17.872 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:17.872 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.872 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:17.872 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:17.872 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:17.872 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:17.872 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:17.872 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.872 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:17.872 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:17.872 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:17.872 11:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:19.244 11:31:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:19.244 11:31:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:19.244 11:31:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:19.244 11:31:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 70422 00:09:21.145 00:09:21.145 job0: (groupid=0, jobs=1): err= 0: pid=70448: Mon Dec 16 11:31:21 2024 00:09:21.145 read: IOPS=12.6k, BW=49.2MiB/s (51.5MB/s)(295MiB/6001msec) 00:09:21.145 slat (usec): min=2, max=5867, avg=42.29, stdev=208.02 00:09:21.145 clat (usec): min=272, max=13250, avg=7119.07, stdev=1566.63 00:09:21.145 lat (usec): min=287, max=13365, avg=7161.36, stdev=1584.29 00:09:21.145 clat percentiles (usec): 00:09:21.145 | 1.00th=[ 3130], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 5800], 00:09:21.145 | 30.00th=[ 6652], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7504], 00:09:21.145 | 70.00th=[ 7832], 80.00th=[ 8291], 90.00th=[ 8848], 95.00th=[ 9241], 00:09:21.145 | 99.00th=[11338], 99.50th=[11731], 99.90th=[12649], 99.95th=[12649], 00:09:21.145 | 99.99th=[13173] 00:09:21.145 bw ( KiB/s): min=12040, max=41704, per=52.71%, avg=26530.18, stdev=9123.11, samples=11 00:09:21.145 iops : min= 3010, max=10426, avg=6632.55, stdev=2280.78, samples=11 00:09:21.145 write: IOPS=7353, BW=28.7MiB/s (30.1MB/s)(149MiB/5184msec); 0 zone resets 00:09:21.145 slat (usec): min=4, max=1869, avg=49.68, stdev=125.70 00:09:21.145 clat (usec): min=226, max=12468, avg=5823.22, stdev=1501.22 00:09:21.145 lat (usec): min=335, max=12493, avg=5872.90, stdev=1513.48 00:09:21.145 clat percentiles (usec): 00:09:21.145 | 1.00th=[ 2573], 5.00th=[ 3228], 10.00th=[ 3654], 20.00th=[ 4228], 00:09:21.145 | 30.00th=[ 4883], 40.00th=[ 5866], 50.00th=[ 6259], 60.00th=[ 6587], 00:09:21.145 | 70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7373], 95.00th=[ 7570], 00:09:21.145 | 99.00th=[ 9110], 99.50th=[10159], 99.90th=[11731], 99.95th=[11994], 00:09:21.145 | 99.99th=[12256] 00:09:21.145 bw ( KiB/s): min=12768, max=40960, per=89.96%, avg=26460.36, stdev=8801.91, samples=11 00:09:21.145 iops : min= 3192, max=10240, avg=6615.09, stdev=2200.48, samples=11 00:09:21.145 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:09:21.145 lat (msec) : 2=0.10%, 4=7.53%, 10=90.14%, 20=2.19% 00:09:21.145 cpu : usr=6.39%, sys=24.16%, ctx=7454, majf=0, minf=90 00:09:21.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:21.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.145 issued rwts: total=75508,38118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.145 00:09:21.145 Run status group 0 (all jobs): 00:09:21.145 READ: bw=49.2MiB/s (51.5MB/s), 49.2MiB/s-49.2MiB/s (51.5MB/s-51.5MB/s), io=295MiB (309MB), run=6001-6001msec 00:09:21.145 WRITE: bw=28.7MiB/s (30.1MB/s), 28.7MiB/s-28.7MiB/s (30.1MB/s-30.1MB/s), io=149MiB (156MB), run=5184-5184msec 00:09:21.145 00:09:21.145 Disk stats (read/write): 00:09:21.145 nvme0n1: ios=74846/37341, merge=0/0, ticks=496415/200145, in_queue=696560, util=98.55% 00:09:21.145 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:21.145 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:09:21.145 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:21.145 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:21.145 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.145 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:21.145 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.145 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:21.145 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.404 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:21.404 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:21.404 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:21.404 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:21.404 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:21.404 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:21.404 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.404 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:21.404 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.404 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.662 rmmod nvme_tcp 00:09:21.662 rmmod nvme_fabrics 00:09:21.662 rmmod nvme_keyring 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 70123 ']' 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 70123 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 70123 ']' 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 70123 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70123 00:09:21.662 killing process with pid 70123 00:09:21.662 11:31:22 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70123' 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 70123 00:09:21.662 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 70123 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:21.921 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:22.180 11:31:22 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:22.180 00:09:22.180 real 0m21.214s 00:09:22.180 user 1m23.384s 00:09:22.180 sys 0m6.184s 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:22.180 ************************************ 00:09:22.180 END TEST nvmf_target_multipath 00:09:22.180 ************************************ 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.180 ************************************ 00:09:22.180 START TEST nvmf_zcopy 00:09:22.180 ************************************ 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:22.180 * Looking for test storage... 
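The multipath run that just finished exercises ANA failover: each rpc.py nvmf_subsystem_listener_set_ana_state call flips a listener between inaccessible/non_optimized/optimized, and the test then polls sysfs until the kernel's view of the path agrees. A minimal sketch of that polling helper, reconstructed from the multipath.sh@18-26 xtrace lines above (the real script's bookkeeping is slightly richer):

check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # Retry until the sysfs node exists and reports the expected ANA state.
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1   # give up after ~20 seconds
        sleep 1s
    done
}

The backslash-escaped patterns in the trace (e.g. \i\n\a\c\c\e\s\s\i\b\l\e) are just xtrace's rendering of a literal right-hand side in [[ ... != ... ]], not garbling.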
00:09:22.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:22.180 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:22.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.440 --rc genhtml_branch_coverage=1 00:09:22.440 --rc genhtml_function_coverage=1 00:09:22.440 --rc genhtml_legend=1 00:09:22.440 --rc geninfo_all_blocks=1 00:09:22.440 --rc geninfo_unexecuted_blocks=1 00:09:22.440 00:09:22.440 ' 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:22.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.440 --rc genhtml_branch_coverage=1 00:09:22.440 --rc genhtml_function_coverage=1 00:09:22.440 --rc genhtml_legend=1 00:09:22.440 --rc geninfo_all_blocks=1 00:09:22.440 --rc geninfo_unexecuted_blocks=1 00:09:22.440 00:09:22.440 ' 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:22.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.440 --rc genhtml_branch_coverage=1 00:09:22.440 --rc genhtml_function_coverage=1 00:09:22.440 --rc genhtml_legend=1 00:09:22.440 --rc geninfo_all_blocks=1 00:09:22.440 --rc geninfo_unexecuted_blocks=1 00:09:22.440 00:09:22.440 ' 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:22.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.440 --rc genhtml_branch_coverage=1 00:09:22.440 --rc genhtml_function_coverage=1 00:09:22.440 --rc genhtml_legend=1 00:09:22.440 --rc geninfo_all_blocks=1 00:09:22.440 --rc geninfo_unexecuted_blocks=1 00:09:22.440 00:09:22.440 ' 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
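The block above is scripts/common.sh gating lcov flags on `lcov --version`: cmp_versions splits both version strings on `.`, `-`, and `:` into arrays and walks them field by field. A rough standalone equivalent of the `lt 1.15 2` path seen in the trace (simplified to numeric fields; the real cmp_versions dispatches on several comparison operators):

version_lt() {
    # Succeeds when $1 sorts strictly before $2 as a dotted version string.
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields count as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "old lcov"   # matches the trace: 1.15 < 2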
00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.440 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.441 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
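One artifact worth flagging in the block above: `/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected` is bash's test builtin refusing `'[' '' -eq 1 ']'`, i.e. an empty variable used as an integer operand. The trace continues normally afterwards, so it is cosmetic here. The usual defensive pattern (the variable name below is illustrative, not taken from the script):

want_feature=""                           # may legitimately be unset or empty
if [ "${want_feature:-0}" -eq 1 ]; then   # empty defaults to 0, keeping [ happy
    echo "feature enabled"
fi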
00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.441 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:22.441 Cannot find device "nvmf_init_br" 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:22.441 11:31:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:22.441 Cannot find device "nvmf_init_br2" 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:22.441 Cannot find device "nvmf_tgt_br" 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:22.441 Cannot find device "nvmf_tgt_br2" 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:22.441 Cannot find device "nvmf_init_br" 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:22.441 Cannot find device "nvmf_init_br2" 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:22.441 Cannot find device "nvmf_tgt_br" 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:22.441 Cannot find device "nvmf_tgt_br2" 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:22.441 Cannot find device "nvmf_br" 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:22.441 Cannot find device "nvmf_init_if" 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:22.441 Cannot find device "nvmf_init_if2" 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:22.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:22.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:22.441 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:22.703 11:31:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:22.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:22.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:09:22.703 00:09:22.703 --- 10.0.0.3 ping statistics --- 00:09:22.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.703 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:22.703 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:22.703 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:09:22.703 00:09:22.703 --- 10.0.0.4 ping statistics --- 00:09:22.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.703 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:22.703 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:22.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:22.703 00:09:22.703 --- 10.0.0.1 ping statistics --- 00:09:22.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.703 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:22.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:22.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:22.704 00:09:22.704 --- 10.0.0.2 ping statistics --- 00:09:22.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.704 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=70783 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 70783 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 70783 ']' 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.704 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.963 [2024-12-16 11:31:23.467825] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
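The four successful pings close out nvmf_veth_init: the target side lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.3/10.0.0.4, the initiator side stays in the root namespace on 10.0.0.1/10.0.0.2, and a bridge stitches the veth halves together. A condensed skeleton of that topology, taken from the ip commands in the trace above (the second interface pair and error handling omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end moves into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br   # bridge the veth halves
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic

Every command appears (modulo ordering) in the nvmf/common.sh lines above; the earlier teardown at the end of the multipath test is the same sequence in reverse.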
00:09:22.963 [2024-12-16 11:31:23.467923] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.963 [2024-12-16 11:31:23.617772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.963 [2024-12-16 11:31:23.680326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.963 [2024-12-16 11:31:23.680377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.963 [2024-12-16 11:31:23.680388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.963 [2024-12-16 11:31:23.680397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.963 [2024-12-16 11:31:23.680405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.963 [2024-12-16 11:31:23.680791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.221 [2024-12-16 11:31:23.862825] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.221 [2024-12-16 11:31:23.878943] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.221 malloc0 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:23.221 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:23.221 { 00:09:23.221 "params": { 00:09:23.221 "name": "Nvme$subsystem", 00:09:23.221 "trtype": "$TEST_TRANSPORT", 00:09:23.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.222 "adrfam": "ipv4", 00:09:23.222 "trsvcid": "$NVMF_PORT", 00:09:23.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.222 "hdgst": ${hdgst:-false}, 00:09:23.222 "ddgst": ${ddgst:-false} 00:09:23.222 }, 00:09:23.222 "method": "bdev_nvme_attach_controller" 00:09:23.222 } 00:09:23.222 EOF 00:09:23.222 )") 00:09:23.222 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:23.222 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
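The gen_nvmf_target_json heredoc above builds one "bdev_nvme_attach_controller" stanza per subsystem, normalizes it through `jq .`, and hands the result to bdevperf as `--json /dev/fd/62`: that path is what bash process substitution expands to, so the generated config never touches disk. A minimal demonstration of the same plumbing (cat stands in for bdevperf; the fd number varies run to run):

gen_json() {
    # Stand-in for gen_nvmf_target_json: emit a config document on stdout.
    printf '{ "params": { "name": "Nvme1", "trtype": "tcp" } }\n'
}
cat <(gen_json)    # bash rewrites this to roughly: cat /dev/fd/63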
00:09:23.222 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:23.222 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:23.222 "params": { 00:09:23.222 "name": "Nvme1", 00:09:23.222 "trtype": "tcp", 00:09:23.222 "traddr": "10.0.0.3", 00:09:23.222 "adrfam": "ipv4", 00:09:23.222 "trsvcid": "4420", 00:09:23.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.222 "hdgst": false, 00:09:23.222 "ddgst": false 00:09:23.222 }, 00:09:23.222 "method": "bdev_nvme_attach_controller" 00:09:23.222 }' 00:09:23.480 [2024-12-16 11:31:23.997514] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:09:23.480 [2024-12-16 11:31:23.997659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70821 ] 00:09:23.480 [2024-12-16 11:31:24.152098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.480 [2024-12-16 11:31:24.220432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.737 Running I/O for 10 seconds... 00:09:26.045 5559.00 IOPS, 43.43 MiB/s [2024-12-16T11:31:27.735Z] 5862.00 IOPS, 45.80 MiB/s [2024-12-16T11:31:28.672Z] 5885.67 IOPS, 45.98 MiB/s [2024-12-16T11:31:29.633Z] 5950.75 IOPS, 46.49 MiB/s [2024-12-16T11:31:30.569Z] 5999.00 IOPS, 46.87 MiB/s [2024-12-16T11:31:31.506Z] 6028.50 IOPS, 47.10 MiB/s [2024-12-16T11:31:32.442Z] 6041.29 IOPS, 47.20 MiB/s [2024-12-16T11:31:33.819Z] 6050.88 IOPS, 47.27 MiB/s [2024-12-16T11:31:34.755Z] 6064.00 IOPS, 47.38 MiB/s [2024-12-16T11:31:34.755Z] 6075.30 IOPS, 47.46 MiB/s 00:09:34.002 Latency(us) 00:09:34.002 [2024-12-16T11:31:34.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.002 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:34.002 Verification LBA range: start 0x0 length 0x1000 00:09:34.002 Nvme1n1 : 10.01 6079.79 47.50 0.00 0.00 20984.04 547.37 36461.85 00:09:34.002 [2024-12-16T11:31:34.755Z] =================================================================================================================== 00:09:34.002 [2024-12-16T11:31:34.755Z] Total : 6079.79 47.50 0.00 0.00 20984.04 547.37 36461.85 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=70939 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.002 { 00:09:34.002 "params": { 00:09:34.002 "name": "Nvme$subsystem", 
00:09:34.002 "trtype": "$TEST_TRANSPORT", 00:09:34.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.002 "adrfam": "ipv4", 00:09:34.002 "trsvcid": "$NVMF_PORT", 00:09:34.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.002 "hdgst": ${hdgst:-false}, 00:09:34.002 "ddgst": ${ddgst:-false} 00:09:34.002 }, 00:09:34.002 "method": "bdev_nvme_attach_controller" 00:09:34.002 } 00:09:34.002 EOF 00:09:34.002 )") 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:34.002 [2024-12-16 11:31:34.629316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.002 [2024-12-16 11:31:34.629358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:34.002 11:31:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.002 "params": { 00:09:34.002 "name": "Nvme1", 00:09:34.002 "trtype": "tcp", 00:09:34.002 "traddr": "10.0.0.3", 00:09:34.002 "adrfam": "ipv4", 00:09:34.002 "trsvcid": "4420", 00:09:34.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.002 "hdgst": false, 00:09:34.002 "ddgst": false 00:09:34.002 }, 00:09:34.002 "method": "bdev_nvme_attach_controller" 00:09:34.002 }' 00:09:34.002 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.002 [2024-12-16 11:31:34.641280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.002 [2024-12-16 11:31:34.641312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.002 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.002 [2024-12-16 11:31:34.653278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.002 [2024-12-16 11:31:34.653307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.002 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.002 [2024-12-16 11:31:34.665282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.002 [2024-12-16 11:31:34.665310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.002 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:09:34.002 [2024-12-16 11:31:34.677301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.002 [2024-12-16 11:31:34.677332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.002 [2024-12-16 11:31:34.679470] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:09:34.002 [2024-12-16 11:31:34.679701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70939 ] 00:09:34.002 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.002 [2024-12-16 11:31:34.689287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.002 [2024-12-16 11:31:34.689316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.002 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.002 [2024-12-16 11:31:34.701290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.002 [2024-12-16 11:31:34.701320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.002 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.003 [2024-12-16 11:31:34.713297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.003 [2024-12-16 11:31:34.713324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.003 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.003 [2024-12-16 11:31:34.725301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.003 [2024-12-16 11:31:34.725328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.003 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.003 [2024-12-16 11:31:34.737299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.003 [2024-12-16 11:31:34.737327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace
00:09:34.003 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:34.003 [2024-12-16 11:31:34.749300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.003 [2024-12-16 11:31:34.749329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:34.262 [2024-12-16 11:31:34.761304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.262 [2024-12-16 11:31:34.761332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:34.262 [2024-12-16 11:31:34.773307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.262 [2024-12-16 11:31:34.773335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:34.262 [2024-12-16 11:31:34.785311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.262 [2024-12-16 11:31:34.785339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:34.262 [2024-12-16 11:31:34.797319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.262 [2024-12-16 11:31:34.797348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:34.262 [2024-12-16 11:31:34.813319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.262 [2024-12-16 11:31:34.813347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
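Each repeating triplet in this stream is one failed round trip: subsystem.c rejects the duplicate NSID, nvmf_rpc.c reports the failed RPC, and the Go JSON-RPC client logs Code=-32602. Reconstructed from the logged method and params (those two fields are verbatim; the "jsonrpc" and "id" envelope is standard JSON-RPC 2.0 framing added here), one exchange looks like this:

    # Request the Go client keeps sending (reconstruction):
    cat <<'EOF'
    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "nvmf_subsystem_add_ns",
      "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {
          "bdev_name": "malloc0",
          "nsid": 1,
          "no_auto_visible": false,
          "hide_metadata": false
        }
      }
    }
    EOF
    # Target response: error Code=-32602 (Invalid parameters), since NSID 1
    # already carries a namespace on nqn.2016-06.io.spdk:cnode1.

The "%!s(bool=false)" noise in the logged params is only the Go client formatting a bool through a %s verb; the values are plain false.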
00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:34.262 [2024-12-16 11:31:34.825324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.262 [2024-12-16 11:31:34.825352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.262 [2024-12-16 11:31:34.827324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:34.262 [2024-12-16 11:31:34.837347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.262 [2024-12-16 11:31:34.837382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:34.262 [2024-12-16 11:31:34.849345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.262 [2024-12-16 11:31:34.849380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:34.262 [2024-12-16 11:31:34.861333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.262 [2024-12-16 11:31:34.861363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:34.262 [2024-12-16 11:31:34.873336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.262 [2024-12-16 11:31:34.873361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:34.262 [2024-12-16 11:31:34.885342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.262 [2024-12-16 11:31:34.885370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params:
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.262 [2024-12-16 11:31:34.897344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.262 [2024-12-16 11:31:34.897373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.262 [2024-12-16 11:31:34.898020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.262 [2024-12-16 11:31:34.909347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.262 [2024-12-16 11:31:34.909374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.262 [2024-12-16 11:31:34.921376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.262 [2024-12-16 11:31:34.921413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.262 [2024-12-16 11:31:34.933380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.262 [2024-12-16 11:31:34.933416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.262 [2024-12-16 11:31:34.945378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.262 [2024-12-16 11:31:34.945412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.262 [2024-12-16 11:31:34.957388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.262 [2024-12-16 11:31:34.957423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.262 [2024-12-16 11:31:34.969368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.262 [2024-12-16 11:31:34.969397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.262 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.263 [2024-12-16 11:31:34.981394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.263 [2024-12-16 11:31:34.981430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.263 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.263 [2024-12-16 11:31:34.993400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.263 [2024-12-16 11:31:34.993440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.263 2024/12/16 11:31:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.263 [2024-12-16 11:31:35.005381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.263 [2024-12-16 11:31:35.005412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.263 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.017400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.017573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.029403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.029437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.041407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.041439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.053412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.053442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.065416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.065448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.077431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.077468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 Running I/O for 5 seconds... 
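"Running I/O for 5 seconds..." marks the start of bdevperf's timed run; from here the add_ns rejections interleave with live I/O. The driving loop has roughly the following shape, sketched under the assumption that the test script simply re-issues the RPC and expects every attempt to fail (the iteration count is illustrative, and the rpc.py argument order, subsystem NQN first and bdev name second with -n for the NSID, is an assumption to check against the script itself):

    # While bdevperf runs, keep re-adding the already-present namespace;
    # every attempt must fail with -32602 for the test to pass.
    for _ in $(seq 1 20); do
        if scripts/rpc.py nvmf_subsystem_add_ns -n 1 \
               nqn.2016-06.io.spdk:cnode1 malloc0; then
            echo "unexpected success: NSID 1 was re-added" >&2
            exit 1
        fi
    done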
00:09:34.521 [2024-12-16 11:31:35.089422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.089453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.105859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.105899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.122137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.122187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.137046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.137085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.153101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.153153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.169705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.169873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.186169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:09:34.521 [2024-12-16 11:31:35.186206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.202562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.202600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.218700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.218737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.236122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.236175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.251947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.251985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.521 [2024-12-16 11:31:35.262171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.521 [2024-12-16 11:31:35.262207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.521 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.780 [2024-12-16 11:31:35.275975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.780 [2024-12-16 11:31:35.276012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:34.780 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.780 [2024-12-16 11:31:35.292243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.780 [2024-12-16 11:31:35.292280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.780 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.780 [2024-12-16 11:31:35.309169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.780 [2024-12-16 11:31:35.309205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.780 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.780 [2024-12-16 11:31:35.325933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.780 [2024-12-16 11:31:35.325971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.780 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.780 [2024-12-16 11:31:35.341739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.780 [2024-12-16 11:31:35.341777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.781 [2024-12-16 11:31:35.351670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.781 [2024-12-16 11:31:35.351831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.781 [2024-12-16 11:31:35.365671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.781 [2024-12-16 11:31:35.365709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.781 [2024-12-16 11:31:35.380701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.781 [2024-12-16 11:31:35.380739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.781 [2024-12-16 11:31:35.396586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.781 [2024-12-16 11:31:35.396627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.781 [2024-12-16 11:31:35.412567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.781 [2024-12-16 11:31:35.412606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.781 [2024-12-16 11:31:35.422192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.781 [2024-12-16 11:31:35.422227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.781 [2024-12-16 11:31:35.436735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.781 [2024-12-16 11:31:35.436773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.781 [2024-12-16 11:31:35.454932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.781 [2024-12-16 11:31:35.454970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.781 [2024-12-16 11:31:35.469917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.781 [2024-12-16 11:31:35.470090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.781 [2024-12-16 11:31:35.488297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.781 [2024-12-16 11:31:35.488334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.781 [2024-12-16 11:31:35.503261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.781 [2024-12-16 11:31:35.503298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:34.781 [2024-12-16 11:31:35.519825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.781 [2024-12-16 11:31:35.519865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.781 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.040 [2024-12-16 11:31:35.534898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.040 [2024-12-16 11:31:35.535056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.040 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.040 [2024-12-16 11:31:35.549381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.040 [2024-12-16 11:31:35.549417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.040 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:35.040 [2024-12-16 11:31:35.566382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.040 [2024-12-16 11:31:35.566420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.040 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.040 [2024-12-16 11:31:35.582588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.040 [2024-12-16 11:31:35.582625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.040 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.040 [2024-12-16 11:31:35.600248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.040 [2024-12-16 11:31:35.600285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.040 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.040 [2024-12-16 11:31:35.614859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.040 [2024-12-16 11:31:35.614899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.040 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.040 [2024-12-16 11:31:35.631266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.040 [2024-12-16 11:31:35.631303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.040 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.040 [2024-12-16 11:31:35.647666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.040 [2024-12-16 11:31:35.647704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.040 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.040 [2024-12-16 11:31:35.664787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:09:35.040 [2024-12-16 11:31:35.664824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.040 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.040 [2024-12-16 11:31:35.679473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.040 [2024-12-16 11:31:35.679510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.040 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.040 [2024-12-16 11:31:35.695598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.040 [2024-12-16 11:31:35.695637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.040 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.040 [2024-12-16 11:31:35.712497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.041 [2024-12-16 11:31:35.712535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.041 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.041 [2024-12-16 11:31:35.729383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.041 [2024-12-16 11:31:35.729419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.041 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.041 [2024-12-16 11:31:35.746488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.041 [2024-12-16 11:31:35.746525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.041 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.041 [2024-12-16 11:31:35.762818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.041 [2024-12-16 11:31:35.762857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:35.041 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.041 [2024-12-16 11:31:35.780303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.041 [2024-12-16 11:31:35.780342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.041 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.795231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.795267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.811255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.811293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.829489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.829527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.844328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.844365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.860619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.860658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.878340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.878378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.892987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.893025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.907811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.907847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.924488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.924523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.941222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.941264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.958244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.958285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.975559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.975602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:35.991241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:35.991277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:36.008519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:36.008693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:36.024995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:36.025033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.300 [2024-12-16 11:31:36.042453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.300 [2024-12-16 11:31:36.042491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.300 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.559 [2024-12-16 11:31:36.057209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.559 [2024-12-16 11:31:36.057245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.559 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters
00:09:35.559 [2024-12-16 11:31:36.072006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16 11:31:36.072191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:35.560 [2024-12-16 11:31:36.088211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16 11:31:36.088249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.560 11986.00 IOPS, 93.64 MiB/s [2024-12-16T11:31:36.313Z] 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:35.560 [2024-12-16 11:31:36.105445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16 11:31:36.105481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:35.560 [2024-12-16 11:31:36.120493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16 11:31:36.120532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:35.560 [2024-12-16 11:31:36.131701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16 11:31:36.131855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:35.560 [2024-12-16 11:31:36.149173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16 11:31:36.149209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
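The "11986.00 IOPS, 93.64 MiB/s" record interleaved above is bdevperf's periodic throughput sample. The two figures are mutually consistent with an 8 KiB I/O size, which a one-liner confirms: 93.64 MiB/s divided by 11986 IOPS is almost exactly 8192 bytes per I/O.

    # Bytes per I/O implied by the sample; prints ~8191.9, i.e. an 8 KiB block.
    awk 'BEGIN { printf "%.1f\n", 93.64 * 1048576 / 11986 }'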
00:09:35.560 [2024-12-16 11:31:36.163914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16 11:31:36.163954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:35.560 [2024-12-16 11:31:36.173489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16 11:31:36.173524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:35.560 [2024-12-16 11:31:36.187709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16 11:31:36.187747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:35.560 [2024-12-16 11:31:36.202720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16 11:31:36.202889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:35.560 [2024-12-16 11:31:36.219399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16 11:31:36.219552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:35.560 [2024-12-16 11:31:36.235275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16 11:31:36.235422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:35.560 [2024-12-16 11:31:36.253033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.560 [2024-12-16
11:31:36.253197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.560 [2024-12-16 11:31:36.267584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.560 [2024-12-16 11:31:36.267735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.560 [2024-12-16 11:31:36.284660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.560 [2024-12-16 11:31:36.284812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.560 [2024-12-16 11:31:36.299133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.560 [2024-12-16 11:31:36.299318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.560 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.819 [2024-12-16 11:31:36.315301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.819 [2024-12-16 11:31:36.315450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.819 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.819 [2024-12-16 11:31:36.332049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.819 [2024-12-16 11:31:36.332214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.819 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.819 [2024-12-16 11:31:36.348730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.819 [2024-12-16 11:31:36.348770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.819 2024/12/16 11:31:36 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.364807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.364845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.382305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.382342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.398333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.398370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.415263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.415300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.431316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.431360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.447062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.447102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.463618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.463657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.479648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.479686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.491564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.491602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.510735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.510774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.525796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.525957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.542817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.542856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:35.820 [2024-12-16 11:31:36.559769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.820 [2024-12-16 11:31:36.559806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.820 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.079 [2024-12-16 11:31:36.575938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.079 [2024-12-16 11:31:36.575974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.079 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.079 [2024-12-16 11:31:36.592365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.079 [2024-12-16 11:31:36.592403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.079 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.079 [2024-12-16 11:31:36.609329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.079 [2024-12-16 11:31:36.609366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.079 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.079 [2024-12-16 11:31:36.626012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.079 [2024-12-16 11:31:36.626050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.079 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.079 [2024-12-16 11:31:36.641891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.079 [2024-12-16 11:31:36.641930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.079 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.079 [2024-12-16 11:31:36.659461] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.079 [2024-12-16 11:31:36.659639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.079 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.080 [2024-12-16 11:31:36.674594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.080 [2024-12-16 11:31:36.674745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.080 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.080 [2024-12-16 11:31:36.685053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.080 [2024-12-16 11:31:36.685087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.080 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.080 [2024-12-16 11:31:36.698950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.080 [2024-12-16 11:31:36.698986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.080 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.080 [2024-12-16 11:31:36.714922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.080 [2024-12-16 11:31:36.714960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.080 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.080 [2024-12-16 11:31:36.731496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.080 [2024-12-16 11:31:36.731532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.080 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.080 [2024-12-16 11:31:36.746509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.080 [2024-12-16 
11:31:36.746547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.080 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.080 [2024-12-16 11:31:36.763196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.080 [2024-12-16 11:31:36.763233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.080 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.080 [2024-12-16 11:31:36.780709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.080 [2024-12-16 11:31:36.780747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.080 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.080 [2024-12-16 11:31:36.795671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.080 [2024-12-16 11:31:36.795724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.080 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.080 [2024-12-16 11:31:36.805309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.080 [2024-12-16 11:31:36.805342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.080 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.080 [2024-12-16 11:31:36.820216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.080 [2024-12-16 11:31:36.820252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.080 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.339 [2024-12-16 11:31:36.830259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:36.830295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:36 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:36.844702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:36.844739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:36.854451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:36.854488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:36.869226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:36.869262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:36.879260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:36.879296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:36.894214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:36.894254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:36.911926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:36.911964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:36.927203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:36.927237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:36.936161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:36.936194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:36.951514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:36.951550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:36.966640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:36.966679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:36.983448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:36.983482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:37.001083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:37.001119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:37.015732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:37.015771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:37.031051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:37.031092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:37.045729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:37.045764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:37.061840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:37.061875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.340 [2024-12-16 11:31:37.078906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.340 [2024-12-16 11:31:37.078942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.340 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.600 12023.00 IOPS, 93.93 MiB/s [2024-12-16T11:31:37.353Z] [2024-12-16 11:31:37.095929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.095981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.600 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:36.600 [2024-12-16 11:31:37.111020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.111072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.600 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.600 [2024-12-16 11:31:37.125918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.125954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.600 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.600 [2024-12-16 11:31:37.135071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.135107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.600 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.600 [2024-12-16 11:31:37.151332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.151368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.600 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.600 [2024-12-16 11:31:37.168445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.168482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.600 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.600 [2024-12-16 11:31:37.184293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.184329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.600 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.600 [2024-12-16 11:31:37.194017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.194055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.600 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.600 [2024-12-16 11:31:37.209046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.209083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.600 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.600 [2024-12-16 11:31:37.219080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.219117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.600 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.600 [2024-12-16 11:31:37.233207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.233242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.600 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.600 [2024-12-16 11:31:37.248101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.248136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.600 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.600 [2024-12-16 11:31:37.262973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.600 [2024-12-16 11:31:37.263010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.601 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.601 [2024-12-16 11:31:37.281173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.601 [2024-12-16 11:31:37.281209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:36.601 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.601 [2024-12-16 11:31:37.295829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.601 [2024-12-16 11:31:37.295865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.601 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.601 [2024-12-16 11:31:37.305667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.601 [2024-12-16 11:31:37.305703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.601 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.601 [2024-12-16 11:31:37.320206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.601 [2024-12-16 11:31:37.320241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.601 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.601 [2024-12-16 11:31:37.336965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.601 [2024-12-16 11:31:37.337002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.601 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.353733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.353780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.369664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.369700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.379134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.379179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.394371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.394409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.410420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.410457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.427340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.427375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.442966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.443032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.452422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.452457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.467509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.467547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.483416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.483453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.500924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.500961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.516667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.516703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.528183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.528215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.545923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.545976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:36.928 [2024-12-16 11:31:37.560417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.560483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.576670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.576706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.595389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.595459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.928 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.928 [2024-12-16 11:31:37.610219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.928 [2024-12-16 11:31:37.610254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.929 [2024-12-16 11:31:37.620210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-16 11:31:37.620239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.929 [2024-12-16 11:31:37.634395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.929 [2024-12-16 11:31:37.634431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.929 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:36.929 [2024-12-16 11:31:37.651168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use
00:09:36.929 [2024-12-16 11:31:37.651204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:36.929 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:36.929 [2024-12-16 11:31:37.667774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:36.929 [2024-12-16 11:31:37.667810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:36.929 2024/12/16 11:31:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:37.445 12034.33 IOPS, 94.02 MiB/s [2024-12-16T11:31:38.198Z] [2024-12-16 11:31:38.090875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:37.445 [2024-12-16 11:31:38.090909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:37.445 2024/12/16 11:31:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:38.483 12043.75 IOPS, 94.09 MiB/s [2024-12-16T11:31:39.236Z] [2024-12-16 11:31:39.098548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:38.483 [2024-12-16 11:31:39.098584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:38.483 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:39.003 [2024-12-16 11:31:39.668169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.003 [2024-12-16 11:31:39.668210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.003 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:39.003 [2024-12-16 11:31:39.684726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested
NSID 1 already in use 00:09:39.003 [2024-12-16 11:31:39.684766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.003 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.003 [2024-12-16 11:31:39.701033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.003 [2024-12-16 11:31:39.701072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.003 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.003 [2024-12-16 11:31:39.718135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.003 [2024-12-16 11:31:39.718184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.003 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.003 [2024-12-16 11:31:39.732823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.003 [2024-12-16 11:31:39.732861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.003 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.003 [2024-12-16 11:31:39.748962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.003 [2024-12-16 11:31:39.749001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.003 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.765170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.765207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.261 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.782825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.782863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:39.261 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.797610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.797647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.261 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.807236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.807281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.261 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.818488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.818537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.261 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.829000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.829037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.261 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.839890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.839926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.261 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.857700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.857739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.261 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.872607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.872644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.261 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.889416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.889455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.261 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.906704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.906743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.261 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.921937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.921975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.261 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.261 [2024-12-16 11:31:39.938776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.261 [2024-12-16 11:31:39.938823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.262 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.262 [2024-12-16 11:31:39.954819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.262 [2024-12-16 11:31:39.954857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.262 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.262 [2024-12-16 11:31:39.971692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.262 [2024-12-16 11:31:39.971730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.262 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.262 [2024-12-16 11:31:39.981663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.262 [2024-12-16 11:31:39.981698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.262 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.262 [2024-12-16 11:31:39.992050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.262 [2024-12-16 11:31:39.992087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.262 2024/12/16 11:31:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.262 [2024-12-16 11:31:40.002502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.262 [2024-12-16 11:31:40.002557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.262 2024/12/16 11:31:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.519 [2024-12-16 11:31:40.015492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.519 [2024-12-16 11:31:40.015530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.519 2024/12/16 11:31:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.519 [2024-12-16 11:31:40.027159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.519 [2024-12-16 11:31:40.027195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.519 2024/12/16 11:31:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
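The burst of Code=-32602 failures above is the negative-path half of the zcopy test: while I/O runs against NSID 1, the test keeps re-issuing nvmf_subsystem_add_ns with the same fixed NSID, and the target rejects every attempt because NSID 1 is still attached. The same collision can be reproduced by hand with SPDK's rpc.py; a minimal sketch, assuming an SPDK checkout and a target whose transport, subsystem, and listener were already configured as earlier in this run (NQN and bdev name taken from the log, the malloc sizing is illustrative):

    # create the backing bdev (64 MiB, 512-byte blocks)
    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    # first attach succeeds and claims NSID 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # second attach with the same NSID fails: Code=-32602, "Requested NSID 1 already in use"
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # detaching the namespace frees the NSID for reuse
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1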
00:09:39.520 12041.60 IOPS, 94.08 MiB/s [2024-12-16T11:31:40.273Z]
[2024-12-16 11:31:40.096740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.520 [2024-12-16 11:31:40.096774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.520 2024/12/16 11:31:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:39.520
00:09:39.520 Latency(us)
00:09:39.520 [2024-12-16T11:31:40.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:39.520 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:39.520 Nvme1n1 : 5.01 12051.30 94.15 0.00 0.00 10607.89 4170.47 17396.83
00:09:39.520 [2024-12-16T11:31:40.273Z] ===================================================================================================================
00:09:39.520 [2024-12-16T11:31:40.273Z] Total : 12051.30 94.15 0.00 0.00 10607.89 4170.47 17396.83
00:09:39.520
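The job summary above is internally consistent: at the 8192-byte I/O size, 12051.30 IOPS × 8192 B ≈ 98,724,250 B/s, and 98,724,250 ÷ 2^20 ≈ 94.15 MiB/s, matching the MiB/s column; by Little's law the average latency also follows from the queue depth, since 128 outstanding I/Os ÷ 12051.30 IOPS ≈ 10.6 ms, close to the 10607.89 µs average reported. The add_ns error records continue below the summary, as the RPC loop is still being driven while the results print.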
[2024-12-16 11:31:40.103075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.520 [2024-12-16 11:31:40.103106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.520 2024/12/16 11:31:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-record failure repeats with timestamps advancing from 11:31:40.115 through 11:31:40.299; the duplicate log entries are elided here ...]
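At this point the spam loop winds down: zcopy.sh kills the backgrounded I/O job (PID 70939), which has already exited on its own, so kill reports "No such process" on the next record and the wait that follows merely reaps its status. The kill-then-wait idiom, sketched here in illustrative form (this is the common shell pattern, not the literal zcopy.sh source):

    perf_pid=$!                            # PID of the backgrounded I/O job
    # ... issue RPCs while the job runs ...
    kill "$perf_pid" 2>/dev/null || true   # tolerate a job that already finished
    wait "$perf_pid"                       # reap it either way and collect its exit status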
00:09:39.779 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (70939) - No such process
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 70939
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:39.779 delay0
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.779 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:09:39.779 [2024-12-16 11:31:40.504422] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:46.341 Initializing NVMe Controllers
00:09:46.341 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:09:46.341 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:46.341 Initialization complete. Launching workers.
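The delay bdev created in the trace above wraps malloc0 with artificial latency (the -r/-t/-w/-n values are average and p99 read/write latencies in microseconds, so roughly one second per operation), which keeps commands in flight long enough for the abort example to have something to cancel: 50/50 random read/write at queue depth 64 for 5 seconds, issuing aborts against the queued commands. Pulled out of the xtrace, the sequence is roughly the following (rpc_cmd in the script is the test suite's wrapper around scripts/rpc.py):

    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The per-namespace abort statistics follow.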
00:09:46.341 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 68 00:09:46.341 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 355, failed to submit 33 00:09:46.341 success 176, unsuccessful 179, failed 0 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:46.341 rmmod nvme_tcp 00:09:46.341 rmmod nvme_fabrics 00:09:46.341 rmmod nvme_keyring 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 70783 ']' 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 70783 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 70783 ']' 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 70783 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70783 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:46.341 killing process with pid 70783 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70783' 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 70783 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 70783 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:46.341 11:31:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:46.341 11:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:46.341 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:46.341 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:46.341 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:46.605 00:09:46.605 real 0m24.397s 00:09:46.605 user 0m39.868s 00:09:46.605 sys 0m6.420s 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.605 ************************************ 00:09:46.605 END TEST nvmf_zcopy 00:09:46.605 ************************************ 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.605 ************************************ 00:09:46.605 START TEST nvmf_nmic 00:09:46.605 ************************************ 00:09:46.605 11:31:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:46.605 * Looking for test storage... 00:09:46.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:46.605 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:46.606 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:46.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.865 --rc genhtml_branch_coverage=1 00:09:46.865 --rc genhtml_function_coverage=1 00:09:46.865 --rc genhtml_legend=1 00:09:46.865 --rc geninfo_all_blocks=1 00:09:46.865 --rc geninfo_unexecuted_blocks=1 00:09:46.865 00:09:46.865 ' 00:09:46.865 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:46.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.865 --rc genhtml_branch_coverage=1 00:09:46.865 --rc genhtml_function_coverage=1 00:09:46.865 --rc genhtml_legend=1 00:09:46.866 --rc geninfo_all_blocks=1 00:09:46.866 --rc geninfo_unexecuted_blocks=1 00:09:46.866 00:09:46.866 ' 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:46.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.866 --rc genhtml_branch_coverage=1 00:09:46.866 --rc genhtml_function_coverage=1 00:09:46.866 --rc genhtml_legend=1 00:09:46.866 --rc geninfo_all_blocks=1 00:09:46.866 --rc geninfo_unexecuted_blocks=1 00:09:46.866 00:09:46.866 ' 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:46.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.866 --rc genhtml_branch_coverage=1 00:09:46.866 --rc genhtml_function_coverage=1 00:09:46.866 --rc genhtml_legend=1 00:09:46.866 --rc geninfo_all_blocks=1 00:09:46.866 --rc geninfo_unexecuted_blocks=1 00:09:46.866 00:09:46.866 ' 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.866 11:31:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.866 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:46.866 11:31:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:46.866 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:46.867 Cannot 
find device "nvmf_init_br" 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:46.867 Cannot find device "nvmf_init_br2" 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:46.867 Cannot find device "nvmf_tgt_br" 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.867 Cannot find device "nvmf_tgt_br2" 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:46.867 Cannot find device "nvmf_init_br" 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:46.867 Cannot find device "nvmf_init_br2" 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:46.867 Cannot find device "nvmf_tgt_br" 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:46.867 Cannot find device "nvmf_tgt_br2" 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:46.867 Cannot find device "nvmf_br" 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:46.867 Cannot find device "nvmf_init_if" 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:46.867 Cannot find device "nvmf_init_if2" 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.867 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.867 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:46.867 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:47.126 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:47.127 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:47.127 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:09:47.127 00:09:47.127 --- 10.0.0.3 ping statistics --- 00:09:47.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.127 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:47.127 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:47.127 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:09:47.127 00:09:47.127 --- 10.0.0.4 ping statistics --- 00:09:47.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.127 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:47.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:47.127 00:09:47.127 --- 10.0.0.1 ping statistics --- 00:09:47.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.127 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:47.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:47.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:09:47.127 00:09:47.127 --- 10.0.0.2 ping statistics --- 00:09:47.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.127 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=71320 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 71320 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 71320 ']' 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.127 11:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:47.385 [2024-12-16 11:31:47.904297] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
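[editor's note] nvmfappstart launches the target inside the namespace (via the NVMF_TARGET_NS_CMD prefix assembled above) so it binds the 10.0.0.3/10.0.0.4 listener addresses, records the pid, and blocks until the RPC socket answers. A minimal sketch; the launch command is exactly as logged above, while the polling loop is an illustrative stand-in for waitforlisten, which really retries a JSON-RPC call against /var/tmp/spdk.sock:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # illustrative readiness wait; the real helper retries an RPC until it succeeds
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

Once the socket is up, the rpc_cmd calls that follow provision the target over that socket: create the TCP transport, a 64 MiB/512 B Malloc bdev, subsystem cnode1, and its listener on 10.0.0.3:4420.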
00:09:47.385 [2024-12-16 11:31:47.904421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.385 [2024-12-16 11:31:48.051109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.385 [2024-12-16 11:31:48.110263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.385 [2024-12-16 11:31:48.110516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.385 [2024-12-16 11:31:48.110595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.385 [2024-12-16 11:31:48.110688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.385 [2024-12-16 11:31:48.110762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.385 [2024-12-16 11:31:48.112043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.385 [2024-12-16 11:31:48.112173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.385 [2024-12-16 11:31:48.112246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.385 [2024-12-16 11:31:48.112254] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.321 [2024-12-16 11:31:48.987857] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.321 11:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.321 Malloc0 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.321 11:31:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.321 [2024-12-16 11:31:49.053891] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:48.321 test case1: single bdev can't be used in multiple subsystems 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.321 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.580 [2024-12-16 11:31:49.081694] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:48.580 [2024-12-16 11:31:49.081739] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:48.580 [2024-12-16 11:31:49.081760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.580 2024/12/16 11:31:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:48.580 request: 00:09:48.580 { 00:09:48.580 "method": "nvmf_subsystem_add_ns", 00:09:48.580 "params": { 00:09:48.580 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:48.580 "namespace": { 00:09:48.580 "bdev_name": "Malloc0", 00:09:48.580 "no_auto_visible": false, 00:09:48.580 "hide_metadata": false 00:09:48.580 } 00:09:48.580 } 00:09:48.580 } 00:09:48.580 Got JSON-RPC error response 00:09:48.580 GoRPCClient: error on JSON-RPC call 00:09:48.580 Adding namespace failed - expected result. 00:09:48.580 test case2: host connect to nvmf target in multiple paths 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.580 [2024-12-16 11:31:49.097915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:48.580 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:48.839 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:48.839 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:48.839 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.839 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:48.839 11:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:50.741 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:50.741 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:50.741 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:50.741 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 
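[editor's note] The JSON-RPC failure above is the expected result of test case1: Malloc0 is already claimed exclusive_write by cnode1, so adding it to cnode2 must be rejected. test case2 then connects the host to cnode1 over both listeners (ports 4420 and 4421), so the kernel initiator sees one namespace through two controllers, and waitforserial polls lsblk until a device with the subsystem serial appears. Equivalent commands, using the hostnqn/hostid generated at the start of the run:

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  # readiness check mirroring waitforserial: one namespace, two paths
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 1

The later "disconnected 2 controller(s)" message confirms the two-path attachment: a single block device backed by two controllers.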
00:09:50.741 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:50.741 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:50.741 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:51.014 [global] 00:09:51.014 thread=1 00:09:51.014 invalidate=1 00:09:51.014 rw=write 00:09:51.014 time_based=1 00:09:51.014 runtime=1 00:09:51.014 ioengine=libaio 00:09:51.014 direct=1 00:09:51.014 bs=4096 00:09:51.014 iodepth=1 00:09:51.014 norandommap=0 00:09:51.014 numjobs=1 00:09:51.014 00:09:51.014 verify_dump=1 00:09:51.014 verify_backlog=512 00:09:51.014 verify_state_save=0 00:09:51.014 do_verify=1 00:09:51.014 verify=crc32c-intel 00:09:51.014 [job0] 00:09:51.014 filename=/dev/nvme0n1 00:09:51.014 Could not set queue depth (nvme0n1) 00:09:51.014 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.014 fio-3.35 00:09:51.014 Starting 1 thread 00:09:52.399 00:09:52.399 job0: (groupid=0, jobs=1): err= 0: pid=71431: Mon Dec 16 11:31:52 2024 00:09:52.399 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:52.399 slat (nsec): min=12393, max=38957, avg=15076.75, stdev=2916.41 00:09:52.399 clat (usec): min=123, max=271, avg=149.72, stdev=13.25 00:09:52.399 lat (usec): min=135, max=284, avg=164.80, stdev=14.21 00:09:52.399 clat percentiles (usec): 00:09:52.399 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:09:52.399 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:09:52.399 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 176], 00:09:52.399 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 208], 99.95th=[ 269], 00:09:52.399 | 99.99th=[ 273] 00:09:52.399 write: IOPS=3499, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1001msec); 0 zone resets 00:09:52.399 slat (usec): min=17, max=120, avg=23.84, stdev= 9.15 00:09:52.399 clat (usec): min=88, max=282, avg=114.01, stdev=24.77 00:09:52.399 lat (usec): min=107, max=341, avg=137.85, stdev=31.97 00:09:52.399 clat percentiles (usec): 00:09:52.399 | 1.00th=[ 93], 5.00th=[ 96], 10.00th=[ 97], 20.00th=[ 100], 00:09:52.399 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 109], 00:09:52.399 | 70.00th=[ 112], 80.00th=[ 119], 90.00th=[ 147], 95.00th=[ 178], 00:09:52.399 | 99.00th=[ 212], 99.50th=[ 227], 99.90th=[ 249], 99.95th=[ 253], 00:09:52.399 | 99.99th=[ 281] 00:09:52.399 bw ( KiB/s): min=12288, max=12288, per=87.78%, avg=12288.00, stdev= 0.00, samples=1 00:09:52.399 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:52.399 lat (usec) : 100=11.09%, 250=88.84%, 500=0.08% 00:09:52.399 cpu : usr=2.00%, sys=10.30%, ctx=6577, majf=0, minf=5 00:09:52.399 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.399 issued rwts: total=3072,3503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.399 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.399 00:09:52.399 Run status group 0 (all jobs): 00:09:52.399 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:52.399 WRITE: bw=13.7MiB/s (14.3MB/s), 13.7MiB/s-13.7MiB/s (14.3MB/s-14.3MB/s), io=13.7MiB 
(14.3MB), run=1001-1001msec 00:09:52.399 00:09:52.399 Disk stats (read/write): 00:09:52.399 nvme0n1: ios=2807/3072, merge=0/0, ticks=442/371, in_queue=813, util=91.16% 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.399 rmmod nvme_tcp 00:09:52.399 rmmod nvme_fabrics 00:09:52.399 rmmod nvme_keyring 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 71320 ']' 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 71320 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 71320 ']' 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 71320 00:09:52.399 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:52.400 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.400 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71320 00:09:52.400 killing process with pid 71320 00:09:52.400 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.400 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.400 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 71320' 00:09:52.400 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 71320 00:09:52.400 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 71320 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:52.658 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:52.917 00:09:52.917 real 0m6.279s 00:09:52.917 user 0m20.277s 00:09:52.917 sys 0m1.486s 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.917 
11:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.917 ************************************ 00:09:52.917 END TEST nvmf_nmic 00:09:52.917 ************************************ 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.917 ************************************ 00:09:52.917 START TEST nvmf_fio_target 00:09:52.917 ************************************ 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:52.917 * Looking for test storage... 00:09:52.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:52.917 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:53.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.177 --rc genhtml_branch_coverage=1 00:09:53.177 --rc genhtml_function_coverage=1 00:09:53.177 --rc genhtml_legend=1 00:09:53.177 --rc geninfo_all_blocks=1 00:09:53.177 --rc geninfo_unexecuted_blocks=1 00:09:53.177 00:09:53.177 ' 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:53.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.177 --rc genhtml_branch_coverage=1 00:09:53.177 --rc genhtml_function_coverage=1 00:09:53.177 --rc genhtml_legend=1 00:09:53.177 --rc geninfo_all_blocks=1 00:09:53.177 --rc geninfo_unexecuted_blocks=1 00:09:53.177 00:09:53.177 ' 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:53.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.177 --rc genhtml_branch_coverage=1 00:09:53.177 --rc genhtml_function_coverage=1 00:09:53.177 --rc genhtml_legend=1 00:09:53.177 --rc geninfo_all_blocks=1 00:09:53.177 --rc geninfo_unexecuted_blocks=1 00:09:53.177 00:09:53.177 ' 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:53.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.177 --rc genhtml_branch_coverage=1 00:09:53.177 --rc genhtml_function_coverage=1 00:09:53.177 --rc genhtml_legend=1 00:09:53.177 --rc geninfo_all_blocks=1 00:09:53.177 --rc geninfo_unexecuted_blocks=1 00:09:53.177 00:09:53.177 ' 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:53.177 
11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:09:53.177 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.178 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.178 11:31:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:53.178 Cannot find device "nvmf_init_br" 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:53.178 Cannot find device "nvmf_init_br2" 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:53.178 Cannot find device "nvmf_tgt_br" 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.178 Cannot find device "nvmf_tgt_br2" 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:53.178 Cannot find device "nvmf_init_br" 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:53.178 Cannot find device "nvmf_init_br2" 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:53.178 Cannot find device "nvmf_tgt_br" 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:53.178 Cannot find device "nvmf_tgt_br2" 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:53.178 Cannot find device "nvmf_br" 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:53.178 Cannot find device "nvmf_init_if" 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:53.178 Cannot find device "nvmf_init_if2" 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:53.178 
11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:53.178 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.437 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.437 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.437 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:53.437 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:53.437 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:53.437 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:53.437 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:53.437 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:53.438 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:53.438 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:09:53.438 00:09:53.438 --- 10.0.0.3 ping statistics --- 00:09:53.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.438 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:53.438 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:53.438 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:09:53.438 00:09:53.438 --- 10.0.0.4 ping statistics --- 00:09:53.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.438 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:53.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:53.438 00:09:53.438 --- 10.0.0.1 ping statistics --- 00:09:53.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.438 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:53.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:53.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:09:53.438 00:09:53.438 --- 10.0.0.2 ping statistics --- 00:09:53.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.438 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=71659 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 71659 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 71659 ']' 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.438 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.697 [2024-12-16 11:31:54.237492] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
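
[Note] At this point the test network is complete: two initiator-side veth pairs carry 10.0.0.1 and 10.0.0.2 on the host, two target-side pairs carry 10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, the four bridge-side peers are enslaved to nvmf_br, iptables accepts TCP port 4420 on the initiator interfaces (each rule tagged with an 'SPDK_NVMF:' comment, presumably so cleanup can later find exactly the rules this run added), and the four pings prove the data path in both directions. Condensed to a single veth pair per side, the construction amounts to:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # (the run additionally opens iptables for tcp/4420 and bridge forwarding)
  ping -c 1 10.0.0.3   # host -> namespace, across the bridge

The target application is then started inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 71659 in this run), and waitforlisten blocks until the app's RPC socket at /var/tmp/spdk.sock is accepting connections.
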
00:09:53.697 [2024-12-16 11:31:54.237610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.697 [2024-12-16 11:31:54.392179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.956 [2024-12-16 11:31:54.461744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.956 [2024-12-16 11:31:54.461798] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.956 [2024-12-16 11:31:54.461812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.956 [2024-12-16 11:31:54.461823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.956 [2024-12-16 11:31:54.461833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.956 [2024-12-16 11:31:54.463253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.956 [2024-12-16 11:31:54.463620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.956 [2024-12-16 11:31:54.463632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.956 [2024-12-16 11:31:54.463895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.890 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.890 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:54.890 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.890 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.890 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.890 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.890 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:55.153 [2024-12-16 11:31:55.652240] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.153 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.411 11:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:55.411 11:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.670 11:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:55.670 11:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:56.238 11:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:56.238 11:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:56.238 11:31:56 
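
[Note] Each `bdev_malloc_create 64 512` call above allocates a RAM-backed bdev of 64 MB with a 512-byte block size and prints the new bdev's name on stdout; fio.sh captures those names into shell variables for the RAID/subsystem steps that follow. A sketch of the capture pattern visible in the xtrace (the trailing space matters for the later word-splitting):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  malloc_bdevs="$("$rpc_py" bdev_malloc_create 64 512) "   # prints "Malloc0"
  malloc_bdevs+=$("$rpc_py" bdev_malloc_create 64 512)     # prints "Malloc1"
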
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:56.238 11:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:56.496 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.064 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:57.064 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.323 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:57.323 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.581 11:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:57.581 11:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:57.840 11:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:58.098 11:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:58.098 11:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:58.664 11:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:58.664 11:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:58.922 11:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:59.180 [2024-12-16 11:31:59.749232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:59.180 11:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:59.438 11:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:59.698 11:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:59.956 11:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:59.956 11:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:59.956 11:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:09:59.956 11:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:59.956 11:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:59.956 11:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:01.865 11:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:01.865 11:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:01.865 11:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.865 11:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:01.865 11:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.865 11:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:01.865 11:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:01.865 [global] 00:10:01.865 thread=1 00:10:01.865 invalidate=1 00:10:01.865 rw=write 00:10:01.865 time_based=1 00:10:01.865 runtime=1 00:10:01.865 ioengine=libaio 00:10:01.865 direct=1 00:10:01.865 bs=4096 00:10:01.865 iodepth=1 00:10:01.865 norandommap=0 00:10:01.865 numjobs=1 00:10:01.865 00:10:01.865 verify_dump=1 00:10:01.865 verify_backlog=512 00:10:01.865 verify_state_save=0 00:10:01.865 do_verify=1 00:10:01.865 verify=crc32c-intel 00:10:01.865 [job0] 00:10:01.865 filename=/dev/nvme0n1 00:10:01.865 [job1] 00:10:01.865 filename=/dev/nvme0n2 00:10:01.865 [job2] 00:10:01.865 filename=/dev/nvme0n3 00:10:01.865 [job3] 00:10:01.865 filename=/dev/nvme0n4 00:10:02.135 Could not set queue depth (nvme0n1) 00:10:02.135 Could not set queue depth (nvme0n2) 00:10:02.135 Could not set queue depth (nvme0n3) 00:10:02.135 Could not set queue depth (nvme0n4) 00:10:02.135 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.135 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.135 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.135 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.135 fio-3.35 00:10:02.135 Starting 4 threads 00:10:03.514 00:10:03.514 job0: (groupid=0, jobs=1): err= 0: pid=71969: Mon Dec 16 11:32:03 2024 00:10:03.514 read: IOPS=1904, BW=7616KiB/s (7799kB/s)(7624KiB/1001msec) 00:10:03.514 slat (nsec): min=10921, max=47240, avg=15651.84, stdev=3697.50 00:10:03.514 clat (usec): min=137, max=2047, avg=255.27, stdev=70.84 00:10:03.514 lat (usec): min=154, max=2071, avg=270.92, stdev=70.03 00:10:03.514 clat percentiles (usec): 00:10:03.514 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 174], 00:10:03.514 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:10:03.514 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 343], 00:10:03.514 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 545], 99.95th=[ 2040], 00:10:03.514 | 99.99th=[ 2040] 00:10:03.514 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:03.514 slat 
(nsec): min=11488, max=96336, avg=23367.44, stdev=6076.09 00:10:03.514 clat (usec): min=108, max=5030, avg=209.21, stdev=138.44 00:10:03.514 lat (usec): min=131, max=5062, avg=232.58, stdev=138.39 00:10:03.514 clat percentiles (usec): 00:10:03.514 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 127], 20.00th=[ 135], 00:10:03.514 | 30.00th=[ 174], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 227], 00:10:03.514 | 70.00th=[ 235], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 281], 00:10:03.514 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 1778], 99.95th=[ 2245], 00:10:03.514 | 99.99th=[ 5014] 00:10:03.514 bw ( KiB/s): min= 9448, max= 9448, per=23.10%, avg=9448.00, stdev= 0.00, samples=1 00:10:03.514 iops : min= 2362, max= 2362, avg=2362.00, stdev= 0.00, samples=1 00:10:03.514 lat (usec) : 250=54.81%, 500=44.99%, 750=0.05% 00:10:03.514 lat (msec) : 2=0.08%, 4=0.05%, 10=0.03% 00:10:03.514 cpu : usr=2.10%, sys=5.90%, ctx=3956, majf=0, minf=5 00:10:03.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.514 issued rwts: total=1906,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.514 job1: (groupid=0, jobs=1): err= 0: pid=71970: Mon Dec 16 11:32:03 2024 00:10:03.514 read: IOPS=3033, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec) 00:10:03.514 slat (usec): min=12, max=126, avg=15.81, stdev= 4.70 00:10:03.514 clat (usec): min=111, max=2117, avg=160.45, stdev=47.56 00:10:03.514 lat (usec): min=149, max=2134, avg=176.26, stdev=47.87 00:10:03.514 clat percentiles (usec): 00:10:03.514 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 149], 00:10:03.514 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:10:03.514 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 178], 00:10:03.514 | 99.00th=[ 202], 99.50th=[ 347], 99.90th=[ 537], 99.95th=[ 1352], 00:10:03.514 | 99.99th=[ 2114] 00:10:03.514 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:03.514 slat (usec): min=18, max=123, avg=23.71, stdev= 6.08 00:10:03.514 clat (usec): min=28, max=593, avg=124.02, stdev=13.79 00:10:03.514 lat (usec): min=122, max=623, avg=147.73, stdev=15.65 00:10:03.514 clat percentiles (usec): 00:10:03.514 | 1.00th=[ 105], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 116], 00:10:03.514 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 126], 00:10:03.514 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 143], 00:10:03.514 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 202], 99.95th=[ 273], 00:10:03.514 | 99.99th=[ 594] 00:10:03.514 bw ( KiB/s): min=12288, max=12288, per=30.04%, avg=12288.00, stdev= 0.00, samples=1 00:10:03.514 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:03.514 lat (usec) : 50=0.02%, 100=0.02%, 250=99.57%, 500=0.28%, 750=0.08% 00:10:03.514 lat (msec) : 2=0.02%, 4=0.02% 00:10:03.514 cpu : usr=2.30%, sys=9.30%, ctx=6115, majf=0, minf=8 00:10:03.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.514 issued rwts: total=3037,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.514 job2: (groupid=0, 
jobs=1): err= 0: pid=71971: Mon Dec 16 11:32:03 2024 00:10:03.514 read: IOPS=1733, BW=6933KiB/s (7099kB/s)(6940KiB/1001msec) 00:10:03.514 slat (nsec): min=9671, max=58577, avg=17566.49, stdev=4681.47 00:10:03.514 clat (usec): min=156, max=7780, avg=282.85, stdev=266.96 00:10:03.514 lat (usec): min=175, max=7798, avg=300.42, stdev=267.30 00:10:03.514 clat percentiles (usec): 00:10:03.514 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 251], 00:10:03.514 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:10:03.514 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 338], 00:10:03.514 | 99.00th=[ 388], 99.50th=[ 1467], 99.90th=[ 4113], 99.95th=[ 7767], 00:10:03.514 | 99.99th=[ 7767] 00:10:03.514 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:03.514 slat (nsec): min=11584, max=60660, avg=23456.75, stdev=5666.61 00:10:03.514 clat (usec): min=113, max=1329, avg=206.90, stdev=54.87 00:10:03.514 lat (usec): min=144, max=1354, avg=230.36, stdev=53.45 00:10:03.514 clat percentiles (usec): 00:10:03.514 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 149], 00:10:03.514 | 30.00th=[ 174], 40.00th=[ 206], 50.00th=[ 217], 60.00th=[ 225], 00:10:03.514 | 70.00th=[ 235], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 273], 00:10:03.514 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 408], 99.95th=[ 816], 00:10:03.514 | 99.99th=[ 1336] 00:10:03.514 bw ( KiB/s): min= 8192, max= 8192, per=20.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:03.514 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:03.514 lat (usec) : 250=52.50%, 500=47.13%, 750=0.05%, 1000=0.03% 00:10:03.514 lat (msec) : 2=0.13%, 4=0.11%, 10=0.05% 00:10:03.514 cpu : usr=1.80%, sys=6.10%, ctx=3787, majf=0, minf=12 00:10:03.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.514 issued rwts: total=1735,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.514 job3: (groupid=0, jobs=1): err= 0: pid=71972: Mon Dec 16 11:32:03 2024 00:10:03.514 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:03.514 slat (nsec): min=12921, max=52000, avg=15720.64, stdev=3148.04 00:10:03.514 clat (usec): min=152, max=223, avg=177.10, stdev=10.68 00:10:03.514 lat (usec): min=167, max=238, avg=192.83, stdev=11.18 00:10:03.515 clat percentiles (usec): 00:10:03.515 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 167], 00:10:03.515 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:10:03.515 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 196], 00:10:03.515 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 217], 99.95th=[ 221], 00:10:03.515 | 99.99th=[ 223] 00:10:03.515 write: IOPS=3063, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:03.515 slat (usec): min=18, max=100, avg=22.39, stdev= 5.51 00:10:03.515 clat (usec): min=114, max=790, avg=139.67, stdev=18.80 00:10:03.515 lat (usec): min=133, max=815, avg=162.06, stdev=20.10 00:10:03.515 clat percentiles (usec): 00:10:03.515 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 130], 00:10:03.515 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:10:03.515 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 159], 00:10:03.515 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 215], 
99.95th=[ 562], 00:10:03.515 | 99.99th=[ 791] 00:10:03.515 bw ( KiB/s): min=12288, max=12288, per=30.04%, avg=12288.00, stdev= 0.00, samples=1 00:10:03.515 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:03.515 lat (usec) : 250=99.95%, 500=0.02%, 750=0.02%, 1000=0.02% 00:10:03.515 cpu : usr=1.20%, sys=9.20%, ctx=5628, majf=0, minf=15 00:10:03.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.515 issued rwts: total=2560,3067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.515 00:10:03.515 Run status group 0 (all jobs): 00:10:03.515 READ: bw=36.0MiB/s (37.8MB/s), 6933KiB/s-11.9MiB/s (7099kB/s-12.4MB/s), io=36.1MiB (37.8MB), run=1001-1001msec 00:10:03.515 WRITE: bw=39.9MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:03.515 00:10:03.515 Disk stats (read/write): 00:10:03.515 nvme0n1: ios=1586/1947, merge=0/0, ticks=402/393, in_queue=795, util=86.97% 00:10:03.515 nvme0n2: ios=2608/2682, merge=0/0, ticks=455/359, in_queue=814, util=89.45% 00:10:03.515 nvme0n3: ios=1550/1717, merge=0/0, ticks=429/353, in_queue=782, util=87.37% 00:10:03.515 nvme0n4: ios=2251/2560, merge=0/0, ticks=414/392, in_queue=806, util=89.69% 00:10:03.515 11:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:03.515 [global] 00:10:03.515 thread=1 00:10:03.515 invalidate=1 00:10:03.515 rw=randwrite 00:10:03.515 time_based=1 00:10:03.515 runtime=1 00:10:03.515 ioengine=libaio 00:10:03.515 direct=1 00:10:03.515 bs=4096 00:10:03.515 iodepth=1 00:10:03.515 norandommap=0 00:10:03.515 numjobs=1 00:10:03.515 00:10:03.515 verify_dump=1 00:10:03.515 verify_backlog=512 00:10:03.515 verify_state_save=0 00:10:03.515 do_verify=1 00:10:03.515 verify=crc32c-intel 00:10:03.515 [job0] 00:10:03.515 filename=/dev/nvme0n1 00:10:03.515 [job1] 00:10:03.515 filename=/dev/nvme0n2 00:10:03.515 [job2] 00:10:03.515 filename=/dev/nvme0n3 00:10:03.515 [job3] 00:10:03.515 filename=/dev/nvme0n4 00:10:03.515 Could not set queue depth (nvme0n1) 00:10:03.515 Could not set queue depth (nvme0n2) 00:10:03.515 Could not set queue depth (nvme0n3) 00:10:03.515 Could not set queue depth (nvme0n4) 00:10:03.515 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.515 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.515 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.515 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.515 fio-3.35 00:10:03.515 Starting 4 threads 00:10:04.892 00:10:04.892 job0: (groupid=0, jobs=1): err= 0: pid=72031: Mon Dec 16 11:32:05 2024 00:10:04.892 read: IOPS=2177, BW=8711KiB/s (8920kB/s)(8720KiB/1001msec) 00:10:04.892 slat (usec): min=12, max=117, avg=19.28, stdev= 8.19 00:10:04.892 clat (usec): min=138, max=6051, avg=210.09, stdev=176.67 00:10:04.892 lat (usec): min=151, max=6083, avg=229.37, stdev=179.56 00:10:04.892 clat percentiles (usec): 00:10:04.892 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 
00:10:04.892 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:10:04.892 | 70.00th=[ 184], 80.00th=[ 306], 90.00th=[ 347], 95.00th=[ 367], 00:10:04.892 | 99.00th=[ 416], 99.50th=[ 457], 99.90th=[ 947], 99.95th=[ 4686], 00:10:04.892 | 99.99th=[ 6063] 00:10:04.892 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:04.892 slat (usec): min=17, max=134, avg=26.73, stdev= 8.79 00:10:04.892 clat (usec): min=105, max=6502, avg=164.50, stdev=180.50 00:10:04.892 lat (usec): min=123, max=6521, avg=191.22, stdev=182.64 00:10:04.892 clat percentiles (usec): 00:10:04.892 | 1.00th=[ 110], 5.00th=[ 114], 10.00th=[ 118], 20.00th=[ 122], 00:10:04.892 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 137], 00:10:04.892 | 70.00th=[ 143], 80.00th=[ 196], 90.00th=[ 265], 95.00th=[ 297], 00:10:04.892 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 3261], 99.95th=[ 4948], 00:10:04.892 | 99.99th=[ 6521] 00:10:04.892 bw ( KiB/s): min= 8192, max= 8192, per=22.18%, avg=8192.00, stdev= 0.00, samples=1 00:10:04.892 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:04.892 lat (usec) : 250=83.14%, 500=16.62%, 750=0.11%, 1000=0.02% 00:10:04.892 lat (msec) : 4=0.02%, 10=0.08% 00:10:04.892 cpu : usr=2.90%, sys=7.90%, ctx=4742, majf=0, minf=5 00:10:04.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.892 issued rwts: total=2180,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.892 job1: (groupid=0, jobs=1): err= 0: pid=72032: Mon Dec 16 11:32:05 2024 00:10:04.892 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:04.892 slat (nsec): min=10879, max=46125, avg=15122.63, stdev=3438.96 00:10:04.892 clat (usec): min=178, max=1137, avg=306.86, stdev=48.72 00:10:04.892 lat (usec): min=192, max=1158, avg=321.98, stdev=49.54 00:10:04.892 clat percentiles (usec): 00:10:04.892 | 1.00th=[ 253], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 273], 00:10:04.892 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:10:04.892 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 367], 95.00th=[ 379], 00:10:04.892 | 99.00th=[ 437], 99.50th=[ 465], 99.90th=[ 635], 99.95th=[ 1139], 00:10:04.892 | 99.99th=[ 1139] 00:10:04.892 write: IOPS=1869, BW=7477KiB/s (7656kB/s)(7484KiB/1001msec); 0 zone resets 00:10:04.892 slat (nsec): min=11140, max=87183, avg=24748.62, stdev=6679.96 00:10:04.892 clat (usec): min=104, max=704, avg=241.85, stdev=47.59 00:10:04.892 lat (usec): min=140, max=724, avg=266.59, stdev=49.99 00:10:04.892 clat percentiles (usec): 00:10:04.892 | 1.00th=[ 182], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 212], 00:10:04.892 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:10:04.892 | 70.00th=[ 249], 80.00th=[ 262], 90.00th=[ 293], 95.00th=[ 330], 00:10:04.892 | 99.00th=[ 416], 99.50th=[ 461], 99.90th=[ 619], 99.95th=[ 709], 00:10:04.892 | 99.99th=[ 709] 00:10:04.892 bw ( KiB/s): min= 8192, max= 8192, per=22.18%, avg=8192.00, stdev= 0.00, samples=1 00:10:04.892 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:04.892 lat (usec) : 250=39.54%, 500=60.08%, 750=0.35% 00:10:04.892 lat (msec) : 2=0.03% 00:10:04.892 cpu : usr=2.00%, sys=5.20%, ctx=3407, majf=0, minf=17 00:10:04.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:04.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.892 issued rwts: total=1536,1871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.892 job2: (groupid=0, jobs=1): err= 0: pid=72033: Mon Dec 16 11:32:05 2024 00:10:04.893 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:04.893 slat (nsec): min=12257, max=49262, avg=15783.69, stdev=3263.05 00:10:04.893 clat (usec): min=150, max=663, avg=180.66, stdev=17.57 00:10:04.893 lat (usec): min=166, max=687, avg=196.44, stdev=18.11 00:10:04.893 clat percentiles (usec): 00:10:04.893 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:10:04.893 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:10:04.893 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:10:04.893 | 99.00th=[ 233], 99.50th=[ 243], 99.90th=[ 258], 99.95th=[ 260], 00:10:04.893 | 99.99th=[ 660] 00:10:04.893 write: IOPS=2937, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec); 0 zone resets 00:10:04.893 slat (nsec): min=17562, max=93903, avg=23928.97, stdev=6198.79 00:10:04.893 clat (usec): min=112, max=519, avg=141.81, stdev=15.53 00:10:04.893 lat (usec): min=135, max=537, avg=165.74, stdev=17.31 00:10:04.893 clat percentiles (usec): 00:10:04.893 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 131], 00:10:04.893 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:10:04.893 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 167], 00:10:04.893 | 99.00th=[ 186], 99.50th=[ 202], 99.90th=[ 243], 99.95th=[ 306], 00:10:04.893 | 99.99th=[ 519] 00:10:04.893 bw ( KiB/s): min=12288, max=12288, per=33.27%, avg=12288.00, stdev= 0.00, samples=1 00:10:04.893 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:04.893 lat (usec) : 250=99.87%, 500=0.09%, 750=0.04% 00:10:04.893 cpu : usr=1.80%, sys=8.80%, ctx=5500, majf=0, minf=11 00:10:04.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.893 issued rwts: total=2560,2940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.893 job3: (groupid=0, jobs=1): err= 0: pid=72034: Mon Dec 16 11:32:05 2024 00:10:04.893 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:04.893 slat (usec): min=11, max=132, avg=15.77, stdev= 4.80 00:10:04.893 clat (usec): min=189, max=1113, avg=306.16, stdev=47.09 00:10:04.893 lat (usec): min=202, max=1130, avg=321.93, stdev=47.94 00:10:04.893 clat percentiles (usec): 00:10:04.893 | 1.00th=[ 253], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 273], 00:10:04.893 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:10:04.893 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 367], 95.00th=[ 375], 00:10:04.893 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[ 594], 99.95th=[ 1106], 00:10:04.893 | 99.99th=[ 1106] 00:10:04.893 write: IOPS=1869, BW=7477KiB/s (7656kB/s)(7484KiB/1001msec); 0 zone resets 00:10:04.893 slat (nsec): min=11173, max=77947, avg=22518.98, stdev=4976.71 00:10:04.893 clat (usec): min=125, max=907, avg=244.29, stdev=52.21 00:10:04.893 lat (usec): min=146, max=929, avg=266.81, stdev=52.24 00:10:04.893 clat percentiles (usec): 
00:10:04.893 | 1.00th=[ 174], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 212], 00:10:04.893 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:10:04.893 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 302], 95.00th=[ 334], 00:10:04.893 | 99.00th=[ 433], 99.50th=[ 457], 99.90th=[ 717], 99.95th=[ 906], 00:10:04.893 | 99.99th=[ 906] 00:10:04.893 bw ( KiB/s): min= 8192, max= 8192, per=22.18%, avg=8192.00, stdev= 0.00, samples=1 00:10:04.893 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:04.893 lat (usec) : 250=37.86%, 500=61.81%, 750=0.26%, 1000=0.03% 00:10:04.893 lat (msec) : 2=0.03% 00:10:04.893 cpu : usr=1.20%, sys=5.50%, ctx=3410, majf=0, minf=11 00:10:04.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.893 issued rwts: total=1536,1871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.893 00:10:04.893 Run status group 0 (all jobs): 00:10:04.893 READ: bw=30.5MiB/s (32.0MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=30.5MiB (32.0MB), run=1001-1001msec 00:10:04.893 WRITE: bw=36.1MiB/s (37.8MB/s), 7477KiB/s-11.5MiB/s (7656kB/s-12.0MB/s), io=36.1MiB (37.9MB), run=1001-1001msec 00:10:04.893 00:10:04.893 Disk stats (read/write): 00:10:04.893 nvme0n1: ios=2074/2048, merge=0/0, ticks=473/361, in_queue=834, util=89.78% 00:10:04.893 nvme0n2: ios=1443/1536, merge=0/0, ticks=456/379, in_queue=835, util=89.71% 00:10:04.893 nvme0n3: ios=2278/2560, merge=0/0, ticks=445/385, in_queue=830, util=89.97% 00:10:04.893 nvme0n4: ios=1420/1536, merge=0/0, ticks=526/374, in_queue=900, util=90.84% 00:10:04.893 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:04.893 [global] 00:10:04.893 thread=1 00:10:04.893 invalidate=1 00:10:04.893 rw=write 00:10:04.893 time_based=1 00:10:04.893 runtime=1 00:10:04.893 ioengine=libaio 00:10:04.893 direct=1 00:10:04.893 bs=4096 00:10:04.893 iodepth=128 00:10:04.893 norandommap=0 00:10:04.893 numjobs=1 00:10:04.893 00:10:04.893 verify_dump=1 00:10:04.893 verify_backlog=512 00:10:04.893 verify_state_save=0 00:10:04.893 do_verify=1 00:10:04.893 verify=crc32c-intel 00:10:04.893 [job0] 00:10:04.893 filename=/dev/nvme0n1 00:10:04.893 [job1] 00:10:04.893 filename=/dev/nvme0n2 00:10:04.893 [job2] 00:10:04.893 filename=/dev/nvme0n3 00:10:04.893 [job3] 00:10:04.893 filename=/dev/nvme0n4 00:10:04.893 Could not set queue depth (nvme0n1) 00:10:04.893 Could not set queue depth (nvme0n2) 00:10:04.893 Could not set queue depth (nvme0n3) 00:10:04.893 Could not set queue depth (nvme0n4) 00:10:04.893 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.893 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.893 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.893 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.893 fio-3.35 00:10:04.893 Starting 4 threads 00:10:06.270 00:10:06.270 job0: (groupid=0, jobs=1): err= 0: pid=72088: Mon Dec 16 11:32:06 2024 00:10:06.270 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 
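
[Note] Recall the subsystem layout: nqn.2016-06.io.spdk:cnode1 exposes four namespaces — Malloc0, Malloc1, the raid0 bdev, and the concat0 bdev — which is why every fio pass drives /dev/nvme0n1 through /dev/nvme0n4. This third pass raises the queue depth: fio-wrapper's flags evidently map onto the job-file keys dumped above (-i 4096 -> bs, -d 128 -> iodepth, -t write -> rw, -r 1 -> runtime, -v -> the crc32c-intel verify options). Under that reading, a roughly equivalent direct fio invocation for one device would be:

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --bs=4096 --iodepth=128 --rw=write --time_based=1 --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512
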
00:10:06.270 slat (usec): min=3, max=16674, avg=202.60, stdev=1187.28 00:10:06.270 clat (usec): min=12636, max=63026, avg=26304.88, stdev=11378.56 00:10:06.270 lat (usec): min=14931, max=63039, avg=26507.48, stdev=11407.96 00:10:06.270 clat percentiles (usec): 00:10:06.270 | 1.00th=[14877], 5.00th=[16188], 10.00th=[17433], 20.00th=[17695], 00:10:06.270 | 30.00th=[18482], 40.00th=[19268], 50.00th=[19530], 60.00th=[25297], 00:10:06.270 | 70.00th=[29754], 80.00th=[38011], 90.00th=[42206], 95.00th=[48497], 00:10:06.270 | 99.00th=[63177], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:10:06.270 | 99.99th=[63177] 00:10:06.270 write: IOPS=2619, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1002msec); 0 zone resets 00:10:06.270 slat (usec): min=13, max=10929, avg=174.87, stdev=920.78 00:10:06.270 clat (usec): min=1010, max=42656, avg=22402.84, stdev=8387.24 00:10:06.270 lat (usec): min=7520, max=42677, avg=22577.71, stdev=8389.45 00:10:06.270 clat percentiles (usec): 00:10:06.270 | 1.00th=[ 8291], 5.00th=[13566], 10.00th=[13698], 20.00th=[14222], 00:10:06.270 | 30.00th=[14484], 40.00th=[15533], 50.00th=[23200], 60.00th=[28181], 00:10:06.270 | 70.00th=[28967], 80.00th=[29754], 90.00th=[32375], 95.00th=[36963], 00:10:06.270 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:06.270 | 99.99th=[42730] 00:10:06.271 bw ( KiB/s): min= 8192, max=12288, per=21.57%, avg=10240.00, stdev=2896.31, samples=2 00:10:06.271 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:10:06.271 lat (msec) : 2=0.02%, 10=0.62%, 20=51.55%, 50=45.42%, 100=2.39% 00:10:06.271 cpu : usr=2.10%, sys=8.49%, ctx=166, majf=0, minf=15 00:10:06.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:06.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.271 issued rwts: total=2560,2625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.271 job1: (groupid=0, jobs=1): err= 0: pid=72089: Mon Dec 16 11:32:06 2024 00:10:06.271 read: IOPS=1917, BW=7669KiB/s (7853kB/s)(7684KiB/1002msec) 00:10:06.271 slat (usec): min=3, max=8780, avg=224.59, stdev=892.28 00:10:06.271 clat (usec): min=292, max=43107, avg=27965.72, stdev=5083.51 00:10:06.271 lat (usec): min=2095, max=43112, avg=28190.31, stdev=5046.25 00:10:06.271 clat percentiles (usec): 00:10:06.271 | 1.00th=[ 2507], 5.00th=[19792], 10.00th=[24511], 20.00th=[26084], 00:10:06.271 | 30.00th=[26870], 40.00th=[28443], 50.00th=[28705], 60.00th=[29230], 00:10:06.271 | 70.00th=[29754], 80.00th=[30278], 90.00th=[31851], 95.00th=[34341], 00:10:06.271 | 99.00th=[38011], 99.50th=[39060], 99.90th=[43254], 99.95th=[43254], 00:10:06.271 | 99.99th=[43254] 00:10:06.271 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:10:06.271 slat (usec): min=6, max=8664, avg=269.53, stdev=965.47 00:10:06.271 clat (usec): min=16711, max=56887, avg=35221.67, stdev=10598.25 00:10:06.271 lat (usec): min=21065, max=56910, avg=35491.20, stdev=10641.87 00:10:06.271 clat percentiles (usec): 00:10:06.271 | 1.00th=[21103], 5.00th=[21890], 10.00th=[22414], 20.00th=[23987], 00:10:06.271 | 30.00th=[27132], 40.00th=[29230], 50.00th=[32375], 60.00th=[39060], 00:10:06.271 | 70.00th=[42206], 80.00th=[45876], 90.00th=[50594], 95.00th=[54264], 00:10:06.271 | 99.00th=[56361], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:10:06.271 | 99.99th=[56886] 00:10:06.271 bw ( 
KiB/s): min= 8928, max= 8928, per=18.80%, avg=8928.00, stdev= 0.00, samples=1 00:10:06.271 iops : min= 2232, max= 2232, avg=2232.00, stdev= 0.00, samples=1 00:10:06.271 lat (usec) : 500=0.03% 00:10:06.271 lat (msec) : 4=0.81%, 20=2.02%, 50=90.98%, 100=6.17% 00:10:06.271 cpu : usr=1.90%, sys=6.39%, ctx=584, majf=0, minf=6 00:10:06.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:10:06.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.271 issued rwts: total=1921,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.271 job2: (groupid=0, jobs=1): err= 0: pid=72090: Mon Dec 16 11:32:06 2024 00:10:06.271 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:10:06.271 slat (usec): min=8, max=3391, avg=94.44, stdev=433.41 00:10:06.271 clat (usec): min=8277, max=17040, avg=12519.39, stdev=1609.29 00:10:06.271 lat (usec): min=8294, max=17991, avg=12613.82, stdev=1574.75 00:10:06.271 clat percentiles (usec): 00:10:06.271 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[10945], 20.00th=[11338], 00:10:06.271 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12387], 00:10:06.271 | 70.00th=[13960], 80.00th=[14484], 90.00th=[14746], 95.00th=[14877], 00:10:06.271 | 99.00th=[15533], 99.50th=[15664], 99.90th=[16909], 99.95th=[16909], 00:10:06.271 | 99.99th=[17171] 00:10:06.271 write: IOPS=5174, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1002msec); 0 zone resets 00:10:06.271 slat (usec): min=10, max=4100, avg=91.89, stdev=368.82 00:10:06.271 clat (usec): min=244, max=16936, avg=11994.27, stdev=2012.82 00:10:06.271 lat (usec): min=2239, max=16955, avg=12086.16, stdev=2016.40 00:10:06.271 clat percentiles (usec): 00:10:06.271 | 1.00th=[ 5669], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[10159], 00:10:06.271 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:10:06.271 | 70.00th=[12649], 80.00th=[13960], 90.00th=[15008], 95.00th=[15270], 00:10:06.271 | 99.00th=[16057], 99.50th=[16581], 99.90th=[16909], 99.95th=[16909], 00:10:06.271 | 99.99th=[16909] 00:10:06.271 bw ( KiB/s): min=20040, max=20040, per=42.21%, avg=20040.00, stdev= 0.00, samples=1 00:10:06.271 iops : min= 5010, max= 5010, avg=5010.00, stdev= 0.00, samples=1 00:10:06.271 lat (usec) : 250=0.01% 00:10:06.271 lat (msec) : 4=0.31%, 10=10.72%, 20=88.96% 00:10:06.271 cpu : usr=4.50%, sys=14.89%, ctx=613, majf=0, minf=5 00:10:06.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:06.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.271 issued rwts: total=5120,5185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.271 job3: (groupid=0, jobs=1): err= 0: pid=72091: Mon Dec 16 11:32:06 2024 00:10:06.271 read: IOPS=1883, BW=7533KiB/s (7714kB/s)(7556KiB/1003msec) 00:10:06.271 slat (usec): min=3, max=9323, avg=232.82, stdev=898.76 00:10:06.271 clat (usec): min=2139, max=43228, avg=28412.41, stdev=3973.22 00:10:06.271 lat (usec): min=11093, max=43635, avg=28645.22, stdev=3916.83 00:10:06.271 clat percentiles (usec): 00:10:06.271 | 1.00th=[11469], 5.00th=[23725], 10.00th=[25297], 20.00th=[26084], 00:10:06.271 | 30.00th=[27132], 40.00th=[28443], 50.00th=[28705], 60.00th=[29230], 00:10:06.271 | 70.00th=[29754], 
80.00th=[30278], 90.00th=[32637], 95.00th=[35914], 00:10:06.271 | 99.00th=[38011], 99.50th=[39060], 99.90th=[43254], 99.95th=[43254], 00:10:06.271 | 99.99th=[43254] 00:10:06.271 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:10:06.271 slat (usec): min=10, max=7779, avg=265.25, stdev=915.16 00:10:06.271 clat (usec): min=16386, max=57011, avg=35260.88, stdev=10876.94 00:10:06.271 lat (usec): min=20831, max=57043, avg=35526.13, stdev=10926.17 00:10:06.271 clat percentiles (usec): 00:10:06.271 | 1.00th=[20841], 5.00th=[21890], 10.00th=[22414], 20.00th=[23725], 00:10:06.271 | 30.00th=[26608], 40.00th=[28967], 50.00th=[31851], 60.00th=[39060], 00:10:06.271 | 70.00th=[42730], 80.00th=[45876], 90.00th=[51119], 95.00th=[54264], 00:10:06.271 | 99.00th=[56886], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:10:06.271 | 99.99th=[56886] 00:10:06.271 bw ( KiB/s): min= 7288, max= 9096, per=17.25%, avg=8192.00, stdev=1278.45, samples=2 00:10:06.271 iops : min= 1822, max= 2274, avg=2048.00, stdev=319.61, samples=2 00:10:06.271 lat (msec) : 4=0.03%, 20=2.06%, 50=90.96%, 100=6.96% 00:10:06.271 cpu : usr=2.10%, sys=6.19%, ctx=678, majf=0, minf=9 00:10:06.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:10:06.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.271 issued rwts: total=1889,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.271 00:10:06.271 Run status group 0 (all jobs): 00:10:06.271 READ: bw=44.7MiB/s (46.9MB/s), 7533KiB/s-20.0MiB/s (7714kB/s-20.9MB/s), io=44.9MiB (47.1MB), run=1002-1003msec 00:10:06.271 WRITE: bw=46.4MiB/s (48.6MB/s), 8167KiB/s-20.2MiB/s (8364kB/s-21.2MB/s), io=46.5MiB (48.8MB), run=1002-1003msec 00:10:06.271 00:10:06.271 Disk stats (read/write): 00:10:06.271 nvme0n1: ios=2098/2336, merge=0/0, ticks=13186/12015, in_queue=25201, util=88.06% 00:10:06.271 nvme0n2: ios=1584/1891, merge=0/0, ticks=10316/15450, in_queue=25766, util=88.77% 00:10:06.271 nvme0n3: ios=4128/4608, merge=0/0, ticks=12334/12253, in_queue=24587, util=89.28% 00:10:06.271 nvme0n4: ios=1536/1900, merge=0/0, ticks=10673/14934, in_queue=25607, util=89.73% 00:10:06.271 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:06.271 [global] 00:10:06.271 thread=1 00:10:06.271 invalidate=1 00:10:06.271 rw=randwrite 00:10:06.271 time_based=1 00:10:06.271 runtime=1 00:10:06.271 ioengine=libaio 00:10:06.271 direct=1 00:10:06.271 bs=4096 00:10:06.271 iodepth=128 00:10:06.271 norandommap=0 00:10:06.271 numjobs=1 00:10:06.271 00:10:06.271 verify_dump=1 00:10:06.271 verify_backlog=512 00:10:06.271 verify_state_save=0 00:10:06.271 do_verify=1 00:10:06.271 verify=crc32c-intel 00:10:06.271 [job0] 00:10:06.271 filename=/dev/nvme0n1 00:10:06.271 [job1] 00:10:06.271 filename=/dev/nvme0n2 00:10:06.271 [job2] 00:10:06.271 filename=/dev/nvme0n3 00:10:06.271 [job3] 00:10:06.271 filename=/dev/nvme0n4 00:10:06.271 Could not set queue depth (nvme0n1) 00:10:06.271 Could not set queue depth (nvme0n2) 00:10:06.271 Could not set queue depth (nvme0n3) 00:10:06.271 Could not set queue depth (nvme0n4) 00:10:06.271 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:06.271 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:06.271 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:06.271 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:06.271 fio-3.35 00:10:06.271 Starting 4 threads 00:10:07.659 00:10:07.659 job0: (groupid=0, jobs=1): err= 0: pid=72150: Mon Dec 16 11:32:08 2024 00:10:07.659 read: IOPS=5350, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1004msec) 00:10:07.659 slat (usec): min=3, max=10541, avg=98.13, stdev=624.44 00:10:07.659 clat (usec): min=2794, max=22051, avg=12362.05, stdev=3169.51 00:10:07.659 lat (usec): min=4702, max=22066, avg=12460.17, stdev=3198.95 00:10:07.659 clat percentiles (usec): 00:10:07.659 | 1.00th=[ 5669], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[10028], 00:10:07.659 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11469], 60.00th=[11863], 00:10:07.659 | 70.00th=[13042], 80.00th=[14353], 90.00th=[17695], 95.00th=[19268], 00:10:07.659 | 99.00th=[21103], 99.50th=[21627], 99.90th=[21890], 99.95th=[22152], 00:10:07.659 | 99.99th=[22152] 00:10:07.659 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:10:07.659 slat (usec): min=5, max=11165, avg=76.55, stdev=328.94 00:10:07.659 clat (usec): min=3400, max=24679, avg=10785.92, stdev=2384.93 00:10:07.659 lat (usec): min=3418, max=24773, avg=10862.48, stdev=2410.45 00:10:07.659 clat percentiles (usec): 00:10:07.659 | 1.00th=[ 4686], 5.00th=[ 5538], 10.00th=[ 6390], 20.00th=[ 9241], 00:10:07.659 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:10:07.659 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12387], 95.00th=[12911], 00:10:07.659 | 99.00th=[14091], 99.50th=[14353], 99.90th=[21890], 99.95th=[22676], 00:10:07.659 | 99.99th=[24773] 00:10:07.659 bw ( KiB/s): min=22320, max=22736, per=35.05%, avg=22528.00, stdev=294.16, samples=2 00:10:07.659 iops : min= 5580, max= 5684, avg=5632.00, stdev=73.54, samples=2 00:10:07.659 lat (msec) : 4=0.06%, 10=21.00%, 20=77.19%, 50=1.74% 00:10:07.659 cpu : usr=5.18%, sys=12.66%, ctx=861, majf=0, minf=6 00:10:07.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:07.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.659 issued rwts: total=5372,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.659 job1: (groupid=0, jobs=1): err= 0: pid=72151: Mon Dec 16 11:32:08 2024 00:10:07.659 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:10:07.659 slat (usec): min=2, max=13515, avg=193.20, stdev=1021.08 00:10:07.659 clat (usec): min=14150, max=40200, avg=24428.18, stdev=3411.39 00:10:07.659 lat (usec): min=14178, max=40236, avg=24621.38, stdev=3508.43 00:10:07.659 clat percentiles (usec): 00:10:07.659 | 1.00th=[16712], 5.00th=[18482], 10.00th=[20317], 20.00th=[22414], 00:10:07.659 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23987], 60.00th=[24249], 00:10:07.659 | 70.00th=[25297], 80.00th=[27132], 90.00th=[29230], 95.00th=[31589], 00:10:07.659 | 99.00th=[33817], 99.50th=[33817], 99.90th=[37487], 99.95th=[39060], 00:10:07.659 | 99.99th=[40109] 00:10:07.659 write: IOPS=2888, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1005msec); 0 zone resets 00:10:07.659 slat (usec): min=3, max=10824, avg=167.70, stdev=936.43 00:10:07.659 clat (usec): min=2600, 
max=35043, avg=22038.30, stdev=3560.68 00:10:07.659 lat (usec): min=5031, max=35097, avg=22206.00, stdev=3608.14 00:10:07.659 clat percentiles (usec): 00:10:07.659 | 1.00th=[10945], 5.00th=[16188], 10.00th=[18220], 20.00th=[19268], 00:10:07.659 | 30.00th=[20841], 40.00th=[21627], 50.00th=[22414], 60.00th=[22938], 00:10:07.659 | 70.00th=[23725], 80.00th=[24511], 90.00th=[25822], 95.00th=[26608], 00:10:07.659 | 99.00th=[30802], 99.50th=[30802], 99.90th=[33162], 99.95th=[33817], 00:10:07.659 | 99.99th=[34866] 00:10:07.659 bw ( KiB/s): min= 9912, max=12288, per=17.27%, avg=11100.00, stdev=1680.09, samples=2 00:10:07.659 iops : min= 2478, max= 3072, avg=2775.00, stdev=420.02, samples=2 00:10:07.659 lat (msec) : 4=0.02%, 10=0.38%, 20=16.36%, 50=83.23% 00:10:07.659 cpu : usr=2.39%, sys=7.77%, ctx=558, majf=0, minf=11 00:10:07.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:07.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.659 issued rwts: total=2560,2903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.659 job2: (groupid=0, jobs=1): err= 0: pid=72152: Mon Dec 16 11:32:08 2024 00:10:07.659 read: IOPS=3999, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1004msec) 00:10:07.659 slat (usec): min=3, max=11115, avg=121.29, stdev=644.65 00:10:07.659 clat (usec): min=3737, max=33282, avg=15697.59, stdev=5323.03 00:10:07.659 lat (usec): min=4877, max=37344, avg=15818.88, stdev=5386.56 00:10:07.659 clat percentiles (usec): 00:10:07.659 | 1.00th=[ 5604], 5.00th=[11338], 10.00th=[11863], 20.00th=[12387], 00:10:07.659 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13566], 00:10:07.659 | 70.00th=[15270], 80.00th=[23200], 90.00th=[24249], 95.00th=[26084], 00:10:07.659 | 99.00th=[28443], 99.50th=[29230], 99.90th=[33162], 99.95th=[33162], 00:10:07.659 | 99.99th=[33162] 00:10:07.659 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:10:07.659 slat (usec): min=5, max=11056, avg=117.27, stdev=693.03 00:10:07.659 clat (usec): min=7815, max=35392, avg=15604.13, stdev=4999.20 00:10:07.659 lat (usec): min=7848, max=35418, avg=15721.40, stdev=5069.74 00:10:07.659 clat percentiles (usec): 00:10:07.659 | 1.00th=[ 8586], 5.00th=[10945], 10.00th=[11600], 20.00th=[12256], 00:10:07.659 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:10:07.659 | 70.00th=[16581], 80.00th=[21890], 90.00th=[23987], 95.00th=[24773], 00:10:07.659 | 99.00th=[26870], 99.50th=[29230], 99.90th=[32375], 99.95th=[33162], 00:10:07.659 | 99.99th=[35390] 00:10:07.659 bw ( KiB/s): min=12288, max=20480, per=25.49%, avg=16384.00, stdev=5792.62, samples=2 00:10:07.659 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:10:07.659 lat (msec) : 4=0.01%, 10=2.29%, 20=72.72%, 50=24.98% 00:10:07.659 cpu : usr=3.69%, sys=11.47%, ctx=504, majf=0, minf=11 00:10:07.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:07.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.659 issued rwts: total=4015,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.659 job3: (groupid=0, jobs=1): err= 0: pid=72153: Mon Dec 16 11:32:08 2024 00:10:07.659 read: IOPS=3516, BW=13.7MiB/s 
(14.4MB/s)(13.9MiB/1009msec) 00:10:07.659 slat (usec): min=4, max=14153, avg=150.17, stdev=886.81 00:10:07.659 clat (usec): min=2761, max=41437, avg=19272.09, stdev=6865.88 00:10:07.659 lat (usec): min=5135, max=45625, avg=19422.26, stdev=6928.54 00:10:07.659 clat percentiles (usec): 00:10:07.659 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[11338], 20.00th=[12256], 00:10:07.659 | 30.00th=[13435], 40.00th=[15401], 50.00th=[19268], 60.00th=[21890], 00:10:07.659 | 70.00th=[23987], 80.00th=[24773], 90.00th=[27395], 95.00th=[31327], 00:10:07.659 | 99.00th=[35390], 99.50th=[35390], 99.90th=[41157], 99.95th=[41157], 00:10:07.659 | 99.99th=[41681] 00:10:07.659 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:10:07.659 slat (usec): min=4, max=14259, avg=123.81, stdev=718.54 00:10:07.659 clat (usec): min=2490, max=35454, avg=16613.19, stdev=5891.41 00:10:07.659 lat (usec): min=2510, max=35480, avg=16737.00, stdev=5950.82 00:10:07.659 clat percentiles (usec): 00:10:07.659 | 1.00th=[ 5276], 5.00th=[ 7177], 10.00th=[11207], 20.00th=[12911], 00:10:07.659 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[17957], 00:10:07.659 | 70.00th=[21103], 80.00th=[22152], 90.00th=[24511], 95.00th=[26346], 00:10:07.659 | 99.00th=[30802], 99.50th=[32900], 99.90th=[32900], 99.95th=[35390], 00:10:07.659 | 99.99th=[35390] 00:10:07.659 bw ( KiB/s): min= 9888, max=18784, per=22.30%, avg=14336.00, stdev=6290.42, samples=2 00:10:07.659 iops : min= 2472, max= 4696, avg=3584.00, stdev=1572.61, samples=2 00:10:07.659 lat (msec) : 4=0.10%, 10=6.23%, 20=52.50%, 50=41.18% 00:10:07.659 cpu : usr=3.47%, sys=9.42%, ctx=639, majf=0, minf=8 00:10:07.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:07.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.659 issued rwts: total=3548,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.659 00:10:07.659 Run status group 0 (all jobs): 00:10:07.659 READ: bw=60.0MiB/s (62.9MB/s), 9.95MiB/s-20.9MiB/s (10.4MB/s-21.9MB/s), io=60.5MiB (63.5MB), run=1004-1009msec 00:10:07.659 WRITE: bw=62.8MiB/s (65.8MB/s), 11.3MiB/s-21.9MiB/s (11.8MB/s-23.0MB/s), io=63.3MiB (66.4MB), run=1004-1009msec 00:10:07.659 00:10:07.659 Disk stats (read/write): 00:10:07.659 nvme0n1: ios=4658/4719, merge=0/0, ticks=53341/49231, in_queue=102572, util=88.08% 00:10:07.659 nvme0n2: ios=2080/2528, merge=0/0, ticks=23608/25752, in_queue=49360, util=85.92% 00:10:07.659 nvme0n3: ios=3072/3527, merge=0/0, ticks=24407/25152, in_queue=49559, util=88.47% 00:10:07.659 nvme0n4: ios=3072/3221, merge=0/0, ticks=39388/36190, in_queue=75578, util=89.52% 00:10:07.659 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:07.659 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=72166 00:10:07.659 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:07.659 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:07.659 [global] 00:10:07.659 thread=1 00:10:07.659 invalidate=1 00:10:07.659 rw=read 00:10:07.659 time_based=1 00:10:07.659 runtime=10 00:10:07.659 ioengine=libaio 00:10:07.659 direct=1 00:10:07.659 bs=4096 00:10:07.659 iodepth=1 00:10:07.659 norandommap=1 00:10:07.659 
numjobs=1 00:10:07.659 00:10:07.659 [job0] 00:10:07.659 filename=/dev/nvme0n1 00:10:07.659 [job1] 00:10:07.659 filename=/dev/nvme0n2 00:10:07.659 [job2] 00:10:07.659 filename=/dev/nvme0n3 00:10:07.659 [job3] 00:10:07.659 filename=/dev/nvme0n4 00:10:07.659 Could not set queue depth (nvme0n1) 00:10:07.659 Could not set queue depth (nvme0n2) 00:10:07.659 Could not set queue depth (nvme0n3) 00:10:07.659 Could not set queue depth (nvme0n4) 00:10:07.659 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.659 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.660 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.660 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.660 fio-3.35 00:10:07.660 Starting 4 threads 00:10:10.960 11:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:10.960 fio: pid=72209, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:10.960 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=49704960, buflen=4096 00:10:10.960 11:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:11.219 fio: pid=72208, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:11.219 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=39268352, buflen=4096 00:10:11.219 11:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.219 11:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:11.478 fio: pid=72206, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:11.478 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=41676800, buflen=4096 00:10:11.478 11:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.478 11:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:11.738 fio: pid=72207, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:11.738 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=50978816, buflen=4096 00:10:11.738 00:10:11.738 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=72206: Mon Dec 16 11:32:12 2024 00:10:11.738 read: IOPS=2858, BW=11.2MiB/s (11.7MB/s)(39.7MiB/3560msec) 00:10:11.738 slat (usec): min=6, max=14310, avg=21.65, stdev=222.47 00:10:11.738 clat (usec): min=138, max=3496, avg=326.62, stdev=93.80 00:10:11.738 lat (usec): min=163, max=14583, avg=348.27, stdev=241.11 00:10:11.738 clat percentiles (usec): 00:10:11.738 | 1.00th=[ 196], 5.00th=[ 225], 10.00th=[ 235], 20.00th=[ 249], 00:10:11.738 | 30.00th=[ 265], 40.00th=[ 289], 50.00th=[ 334], 60.00th=[ 355], 00:10:11.738 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 408], 95.00th=[ 490], 00:10:11.738 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 701], 99.95th=[ 873], 
00:10:11.738 | 99.99th=[ 3392] 00:10:11.738 bw ( KiB/s): min= 9824, max=15072, per=25.26%, avg=11608.00, stdev=2010.83, samples=6 00:10:11.738 iops : min= 2456, max= 3768, avg=2902.00, stdev=502.71, samples=6 00:10:11.738 lat (usec) : 250=20.86%, 500=74.54%, 750=4.51%, 1000=0.03% 00:10:11.738 lat (msec) : 2=0.02%, 4=0.03% 00:10:11.738 cpu : usr=1.12%, sys=3.91%, ctx=11200, majf=0, minf=1 00:10:11.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.738 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.738 issued rwts: total=10176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.738 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=72207: Mon Dec 16 11:32:12 2024 00:10:11.738 read: IOPS=3224, BW=12.6MiB/s (13.2MB/s)(48.6MiB/3860msec) 00:10:11.738 slat (usec): min=6, max=9871, avg=21.52, stdev=170.68 00:10:11.738 clat (usec): min=145, max=3653, avg=287.21, stdev=94.56 00:10:11.738 lat (usec): min=160, max=10230, avg=308.73, stdev=195.52 00:10:11.738 clat percentiles (usec): 00:10:11.738 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 198], 00:10:11.738 | 30.00th=[ 239], 40.00th=[ 258], 50.00th=[ 277], 60.00th=[ 314], 00:10:11.738 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 404], 00:10:11.738 | 99.00th=[ 461], 99.50th=[ 490], 99.90th=[ 619], 99.95th=[ 857], 00:10:11.738 | 99.99th=[ 3392] 00:10:11.738 bw ( KiB/s): min=10224, max=18256, per=27.34%, avg=12563.86, stdev=3086.64, samples=7 00:10:11.738 iops : min= 2556, max= 4564, avg=3140.86, stdev=771.73, samples=7 00:10:11.738 lat (usec) : 250=35.92%, 500=63.66%, 750=0.34%, 1000=0.03% 00:10:11.738 lat (msec) : 2=0.02%, 4=0.02% 00:10:11.738 cpu : usr=0.86%, sys=4.43%, ctx=14394, majf=0, minf=2 00:10:11.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.738 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.738 issued rwts: total=12447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.738 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=72208: Mon Dec 16 11:32:12 2024 00:10:11.738 read: IOPS=2919, BW=11.4MiB/s (12.0MB/s)(37.4MiB/3284msec) 00:10:11.738 slat (usec): min=6, max=11621, avg=18.40, stdev=163.75 00:10:11.738 clat (usec): min=147, max=7535, avg=322.64, stdev=112.92 00:10:11.738 lat (usec): min=160, max=11875, avg=341.04, stdev=199.12 00:10:11.738 clat percentiles (usec): 00:10:11.738 | 1.00th=[ 204], 5.00th=[ 233], 10.00th=[ 247], 20.00th=[ 269], 00:10:11.738 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 347], 00:10:11.738 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 388], 95.00th=[ 400], 00:10:11.738 | 99.00th=[ 437], 99.50th=[ 465], 99.90th=[ 914], 99.95th=[ 2474], 00:10:11.738 | 99.99th=[ 7504] 00:10:11.738 bw ( KiB/s): min=10312, max=13848, per=25.16%, avg=11562.67, stdev=1488.93, samples=6 00:10:11.738 iops : min= 2578, max= 3462, avg=2890.67, stdev=372.23, samples=6 00:10:11.738 lat (usec) : 250=12.09%, 500=87.58%, 750=0.21%, 1000=0.03% 00:10:11.738 lat (msec) : 2=0.03%, 4=0.03%, 10=0.02% 00:10:11.738 cpu : usr=0.79%, sys=3.75%, ctx=11350, majf=0, 
minf=2 00:10:11.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.738 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.738 issued rwts: total=9588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.738 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=72209: Mon Dec 16 11:32:12 2024 00:10:11.738 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(47.4MiB/2975msec) 00:10:11.738 slat (nsec): min=8487, max=82827, avg=13945.87, stdev=3559.13 00:10:11.738 clat (usec): min=145, max=7179, avg=229.86, stdev=129.57 00:10:11.738 lat (usec): min=158, max=7196, avg=243.80, stdev=130.04 00:10:11.738 clat percentiles (usec): 00:10:11.738 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:10:11.738 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 198], 00:10:11.738 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 351], 00:10:11.738 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 635], 99.95th=[ 1844], 00:10:11.738 | 99.99th=[ 5866] 00:10:11.738 bw ( KiB/s): min= 9784, max=20416, per=33.97%, avg=15611.20, stdev=4629.95, samples=5 00:10:11.738 iops : min= 2446, max= 5104, avg=3902.80, stdev=1157.49, samples=5 00:10:11.738 lat (usec) : 250=68.68%, 500=27.76%, 750=3.49%, 1000=0.01% 00:10:11.738 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02% 00:10:11.738 cpu : usr=1.21%, sys=4.71%, ctx=12136, majf=0, minf=2 00:10:11.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.738 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.738 issued rwts: total=12136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.738 00:10:11.738 Run status group 0 (all jobs): 00:10:11.738 READ: bw=44.9MiB/s (47.1MB/s), 11.2MiB/s-15.9MiB/s (11.7MB/s-16.7MB/s), io=173MiB (182MB), run=2975-3860msec 00:10:11.738 00:10:11.738 Disk stats (read/write): 00:10:11.738 nvme0n1: ios=9595/0, merge=0/0, ticks=3133/0, in_queue=3133, util=95.13% 00:10:11.738 nvme0n2: ios=11300/0, merge=0/0, ticks=3292/0, in_queue=3292, util=95.58% 00:10:11.738 nvme0n3: ios=8992/0, merge=0/0, ticks=2910/0, in_queue=2910, util=96.18% 00:10:11.738 nvme0n4: ios=11604/0, merge=0/0, ticks=2695/0, in_queue=2695, util=96.56% 00:10:11.738 11:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.738 11:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:11.997 11:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.997 11:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:12.257 11:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:12.257 11:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 
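The traces above capture the core of the hotplug test: fio reads run against /dev/nvme0n1..nvme0n4 while rpc.py deletes the raid and malloc bdevs backing them, and the io_u "Operation not supported" errors are the expected outcome. A minimal standalone sketch of the same pattern, assuming a single exported namespace and the repo path shown in the log (the fio flags here translate the fio-wrapper options and are illustrative, not the wrapper's exact invocation):

  # start a time-based read job in the background
  fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 --iodepth=1 \
      --ioengine=libaio --direct=1 --time_based --runtime=10 &
  fio_pid=$!
  sleep 3   # let I/O get going before pulling the backing bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
  wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'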
00:10:12.519 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:12.519 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:13.085 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.085 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 72166 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:13.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:13.344 nvmf hotplug test: fio failed as expected 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:13.344 11:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:13.602 11:32:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.602 rmmod nvme_tcp 00:10:13.602 rmmod nvme_fabrics 00:10:13.602 rmmod nvme_keyring 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 71659 ']' 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 71659 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 71659 ']' 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 71659 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71659 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71659' 00:10:13.602 killing process with pid 71659 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 71659 00:10:13.602 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 71659 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # 
ip link set nvmf_tgt_br2 nomaster 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:13.861 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:14.120 00:10:14.120 real 0m21.198s 00:10:14.120 user 1m22.651s 00:10:14.120 sys 0m8.066s 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.120 ************************************ 00:10:14.120 END TEST nvmf_fio_target 00:10:14.120 ************************************ 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.120 ************************************ 00:10:14.120 START TEST nvmf_bdevio 00:10:14.120 ************************************ 00:10:14.120 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:14.120 * Looking for test storage... 
00:10:14.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.379 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:14.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.380 --rc genhtml_branch_coverage=1 00:10:14.380 --rc genhtml_function_coverage=1 00:10:14.380 --rc genhtml_legend=1 00:10:14.380 --rc geninfo_all_blocks=1 00:10:14.380 --rc geninfo_unexecuted_blocks=1 00:10:14.380 00:10:14.380 ' 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:14.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.380 --rc genhtml_branch_coverage=1 00:10:14.380 --rc genhtml_function_coverage=1 00:10:14.380 --rc genhtml_legend=1 00:10:14.380 --rc geninfo_all_blocks=1 00:10:14.380 --rc geninfo_unexecuted_blocks=1 00:10:14.380 00:10:14.380 ' 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:14.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.380 --rc genhtml_branch_coverage=1 00:10:14.380 --rc genhtml_function_coverage=1 00:10:14.380 --rc genhtml_legend=1 00:10:14.380 --rc geninfo_all_blocks=1 00:10:14.380 --rc geninfo_unexecuted_blocks=1 00:10:14.380 00:10:14.380 ' 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:14.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.380 --rc genhtml_branch_coverage=1 00:10:14.380 --rc genhtml_function_coverage=1 00:10:14.380 --rc genhtml_legend=1 00:10:14.380 --rc geninfo_all_blocks=1 00:10:14.380 --rc geninfo_unexecuted_blocks=1 00:10:14.380 00:10:14.380 ' 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.380 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
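For orientation before nvmftestinit runs: the host identity exported above (NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID as its bare UUID) feeds the initiator-side nvme-cli calls later in the test. A hedged sketch of the resulting connect invocation, with the address, port, and subsystem NQN mirroring this log rather than taken from the scripts:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # strip the prefix, keep the bare UUID
  nvme connect -t tcp -a 10.0.0.3 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"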
00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:14.380 11:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:14.380 Cannot find device "nvmf_init_br" 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:14.381 Cannot find device "nvmf_init_br2" 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:14.381 Cannot find device "nvmf_tgt_br" 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:14.381 Cannot find device "nvmf_tgt_br2" 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:14.381 Cannot find device "nvmf_init_br" 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:14.381 Cannot find device "nvmf_init_br2" 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:14.381 Cannot find device "nvmf_tgt_br" 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:14.381 Cannot find device "nvmf_tgt_br2" 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:14.381 Cannot find device "nvmf_br" 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:14.381 Cannot find device "nvmf_init_if" 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:14.381 Cannot find device "nvmf_init_if2" 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:14.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:14.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:14.381 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:14.642 
11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:14.642 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:14.642 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:10:14.642 00:10:14.642 --- 10.0.0.3 ping statistics --- 00:10:14.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.642 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:14.642 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:14.642 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:10:14.642 00:10:14.642 --- 10.0.0.4 ping statistics --- 00:10:14.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.642 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:14.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:10:14.642 00:10:14.642 --- 10.0.0.1 ping statistics --- 00:10:14.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.642 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:14.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:14.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:10:14.642 00:10:14.642 --- 10.0.0.2 ping statistics --- 00:10:14.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.642 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.642 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.643 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.643 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.643 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=72592 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 72592 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 72592 ']' 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.902 11:32:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.902 [2024-12-16 11:32:15.476397] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:10:14.902 [2024-12-16 11:32:15.476510] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.902 [2024-12-16 11:32:15.625958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.160 [2024-12-16 11:32:15.693751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.160 [2024-12-16 11:32:15.694106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.160 [2024-12-16 11:32:15.694328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.160 [2024-12-16 11:32:15.694448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.160 [2024-12-16 11:32:15.694487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.160 [2024-12-16 11:32:15.695843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:15.160 [2024-12-16 11:32:15.696117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:15.160 [2024-12-16 11:32:15.696117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.160 [2024-12-16 11:32:15.695993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.096 [2024-12-16 11:32:16.552171] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.096 Malloc0 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
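Collected for reference, the target bring-up traced in these records reduces to five RPCs (the namespace and listener additions appear in the traces just below); every command here is taken verbatim from the log and can be replayed against a running nvmf_tgt:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                     # transport options exactly as traced above
  $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512 B blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420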
00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.096 [2024-12-16 11:32:16.617515] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:16.096 { 00:10:16.096 "params": { 00:10:16.096 "name": "Nvme$subsystem", 00:10:16.096 "trtype": "$TEST_TRANSPORT", 00:10:16.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.096 "adrfam": "ipv4", 00:10:16.096 "trsvcid": "$NVMF_PORT", 00:10:16.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.096 "hdgst": ${hdgst:-false}, 00:10:16.096 "ddgst": ${ddgst:-false} 00:10:16.096 }, 00:10:16.096 "method": "bdev_nvme_attach_controller" 00:10:16.096 } 00:10:16.096 EOF 00:10:16.096 )") 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:16.096 11:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:16.096 "params": { 00:10:16.096 "name": "Nvme1", 00:10:16.096 "trtype": "tcp", 00:10:16.096 "traddr": "10.0.0.3", 00:10:16.096 "adrfam": "ipv4", 00:10:16.096 "trsvcid": "4420", 00:10:16.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.096 "hdgst": false, 00:10:16.096 "ddgst": false 00:10:16.096 }, 00:10:16.096 "method": "bdev_nvme_attach_controller" 00:10:16.096 }' 00:10:16.096 [2024-12-16 11:32:16.679737] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:10:16.096 [2024-12-16 11:32:16.680016] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72646 ] 00:10:16.096 [2024-12-16 11:32:16.833017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.364 [2024-12-16 11:32:16.906674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.364 [2024-12-16 11:32:16.906828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.364 [2024-12-16 11:32:16.906840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.364 I/O targets: 00:10:16.364 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:16.364 00:10:16.364 00:10:16.364 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.364 http://cunit.sourceforge.net/ 00:10:16.364 00:10:16.364 00:10:16.364 Suite: bdevio tests on: Nvme1n1 00:10:16.636 Test: blockdev write read block ...passed 00:10:16.636 Test: blockdev write zeroes read block ...passed 00:10:16.636 Test: blockdev write zeroes read no split ...passed 00:10:16.636 Test: blockdev write zeroes read split ...passed 00:10:16.636 Test: blockdev write zeroes read split partial ...passed 00:10:16.636 Test: blockdev reset ...[2024-12-16 11:32:17.222016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:16.636 [2024-12-16 11:32:17.222285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a5f70 (9): Bad file descriptor 00:10:16.636 [2024-12-16 11:32:17.234031] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:16.636 passed 00:10:16.636 Test: blockdev write read 8 blocks ...
00:10:16.636 passed 00:10:16.636 Test: blockdev write read size > 128k ...passed 00:10:16.636 Test: blockdev write read invalid size ...passed 00:10:16.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:16.636 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:16.636 Test: blockdev write read max offset ...passed 00:10:16.636 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:16.636 Test: blockdev writev readv 8 blocks ...passed 00:10:16.636 Test: blockdev writev readv 30 x 1block ...passed 00:10:16.923 Test: blockdev writev readv block ...passed 00:10:16.923 Test: blockdev writev readv size > 128k ...passed 00:10:16.923 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:16.923 Test: blockdev comparev and writev ...[2024-12-16 11:32:17.411381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.923 [2024-12-16 11:32:17.411437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:16.923 [2024-12-16 11:32:17.411459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.923 [2024-12-16 11:32:17.411472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:16.923 [2024-12-16 11:32:17.411781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.923 [2024-12-16 11:32:17.411800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:16.923 [2024-12-16 11:32:17.411816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.923 [2024-12-16 11:32:17.411827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:16.923 [2024-12-16 11:32:17.412114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.923 [2024-12-16 11:32:17.412131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:16.923 [2024-12-16 11:32:17.412167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.923 [2024-12-16 11:32:17.412181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:16.924 [2024-12-16 11:32:17.412458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.924 [2024-12-16 11:32:17.412488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:16.924 [2024-12-16 11:32:17.412505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.924 [2024-12-16 11:32:17.412515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:10:16.924 passed 00:10:16.924 Test: blockdev nvme passthru rw ...passed 00:10:16.924 Test: blockdev nvme passthru vendor specific ...[2024-12-16 11:32:17.496863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.924 passed 00:10:16.924 Test: blockdev nvme admin passthru ...[2024-12-16 11:32:17.497211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:16.924 [2024-12-16 11:32:17.497365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.924 [2024-12-16 11:32:17.497383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:16.924 [2024-12-16 11:32:17.497500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.924 [2024-12-16 11:32:17.497516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:16.924 [2024-12-16 11:32:17.497662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.924 [2024-12-16 11:32:17.497679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:16.924 passed 00:10:16.924 Test: blockdev copy ...passed 00:10:16.924 00:10:16.924 Run Summary: Type Total Ran Passed Failed Inactive 00:10:16.924 suites 1 1 n/a 0 0 00:10:16.924 tests 23 23 23 0 0 00:10:16.924 asserts 152 152 152 0 n/a 00:10:16.924 00:10:16.924 Elapsed time = 0.893 seconds 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.182 rmmod nvme_tcp 00:10:17.182 rmmod nvme_fabrics 00:10:17.182 rmmod nvme_keyring 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:17.182 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:17.183 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio --
nvmf/common.sh@517 -- # '[' -n 72592 ']' 00:10:17.183 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 72592 00:10:17.183 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 72592 ']' 00:10:17.183 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 72592 00:10:17.183 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:17.183 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.183 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72592 00:10:17.183 killing process with pid 72592 00:10:17.183 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:17.183 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:17.183 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72592' 00:10:17.183 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 72592 00:10:17.183 11:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 72592 00:10:17.441 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.441 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.441 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.441 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:17.441 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.441 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:17.441 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.441 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.441 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:17.442 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:17.442 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:17.442 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:17.442 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.442 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:17.442 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:17.442 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:17.442 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:17.442 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:17.701 11:32:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:17.701 00:10:17.701 real 0m3.551s 00:10:17.701 user 0m11.907s 00:10:17.701 sys 0m0.913s 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.701 ************************************ 00:10:17.701 END TEST nvmf_bdevio 00:10:17.701 ************************************ 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:17.701 ************************************ 00:10:17.701 END TEST nvmf_target_core 00:10:17.701 ************************************ 00:10:17.701 00:10:17.701 real 3m40.164s 00:10:17.701 user 11m40.937s 00:10:17.701 sys 1m0.761s 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.701 11:32:18 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:17.701 11:32:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.701 11:32:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.701 11:32:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:17.701 ************************************ 00:10:17.701 START TEST nvmf_target_extra 00:10:17.701 ************************************ 00:10:17.701 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:17.960 * Looking for test storage... 
00:10:17.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:17.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.960 --rc genhtml_branch_coverage=1 00:10:17.960 --rc genhtml_function_coverage=1 00:10:17.960 --rc genhtml_legend=1 00:10:17.960 --rc geninfo_all_blocks=1 00:10:17.960 --rc geninfo_unexecuted_blocks=1 00:10:17.960 00:10:17.960 ' 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:17.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.960 --rc genhtml_branch_coverage=1 00:10:17.960 --rc genhtml_function_coverage=1 00:10:17.960 --rc genhtml_legend=1 00:10:17.960 --rc geninfo_all_blocks=1 00:10:17.960 --rc geninfo_unexecuted_blocks=1 00:10:17.960 00:10:17.960 ' 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:17.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.960 --rc genhtml_branch_coverage=1 00:10:17.960 --rc genhtml_function_coverage=1 00:10:17.960 --rc genhtml_legend=1 00:10:17.960 --rc geninfo_all_blocks=1 00:10:17.960 --rc geninfo_unexecuted_blocks=1 00:10:17.960 00:10:17.960 ' 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:17.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.960 --rc genhtml_branch_coverage=1 00:10:17.960 --rc genhtml_function_coverage=1 00:10:17.960 --rc genhtml_legend=1 00:10:17.960 --rc geninfo_all_blocks=1 00:10:17.960 --rc geninfo_unexecuted_blocks=1 00:10:17.960 00:10:17.960 ' 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.960 11:32:18 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.960 11:32:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.961 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:17.961 ************************************ 00:10:17.961 START TEST nvmf_example 00:10:17.961 ************************************ 00:10:17.961 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:18.220 * Looking for test storage... 
00:10:18.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:18.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.220 --rc genhtml_branch_coverage=1 00:10:18.220 --rc genhtml_function_coverage=1 00:10:18.220 --rc genhtml_legend=1 00:10:18.220 --rc geninfo_all_blocks=1 00:10:18.220 --rc geninfo_unexecuted_blocks=1 00:10:18.220 00:10:18.220 ' 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:18.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.220 --rc genhtml_branch_coverage=1 00:10:18.220 --rc genhtml_function_coverage=1 00:10:18.220 --rc genhtml_legend=1 00:10:18.220 --rc geninfo_all_blocks=1 00:10:18.220 --rc geninfo_unexecuted_blocks=1 00:10:18.220 00:10:18.220 ' 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:18.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.220 --rc genhtml_branch_coverage=1 00:10:18.220 --rc genhtml_function_coverage=1 00:10:18.220 --rc genhtml_legend=1 00:10:18.220 --rc geninfo_all_blocks=1 00:10:18.220 --rc geninfo_unexecuted_blocks=1 00:10:18.220 00:10:18.220 ' 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:18.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.220 --rc genhtml_branch_coverage=1 00:10:18.220 --rc genhtml_function_coverage=1 00:10:18.220 --rc genhtml_legend=1 00:10:18.220 --rc geninfo_all_blocks=1 00:10:18.220 --rc geninfo_unexecuted_blocks=1 00:10:18.220 00:10:18.220 ' 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.220 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:18.220 11:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.221 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:18.221 11:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:18.221 Cannot find device "nvmf_init_br" 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:18.221 Cannot find device "nvmf_init_br2" 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:18.221 Cannot find device "nvmf_tgt_br" 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.221 Cannot find device "nvmf_tgt_br2" 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:18.221 Cannot find device "nvmf_init_br" 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:10:18.221 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:18.480 Cannot find device "nvmf_init_br2" 00:10:18.480 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:10:18.480 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:18.480 Cannot find device "nvmf_tgt_br" 00:10:18.480 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:10:18.480 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:18.480 Cannot find device "nvmf_tgt_br2" 00:10:18.480 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:10:18.480 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:18.480 Cannot find device "nvmf_br" 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:18.480 Cannot find 
device "nvmf_init_if" 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:18.480 Cannot find device "nvmf_init_if2" 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.480 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:18.739 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:18.739 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.739 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:18.739 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.739 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.739 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.739 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:18.739 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:18.739 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:18.739 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.739 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:18.739 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:18.739 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:18.739 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:10:18.739 00:10:18.739 --- 10.0.0.3 ping statistics --- 00:10:18.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.739 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:18.740 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:18.740 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:10:18.740 00:10:18.740 --- 10.0.0.4 ping statistics --- 00:10:18.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.740 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:18.740 00:10:18.740 --- 10.0.0.1 ping statistics --- 00:10:18.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.740 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:18.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:10:18.740 00:10:18.740 --- 10.0.0.2 ping statistics --- 00:10:18.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.740 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=72946 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 72946 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 72946 ']' 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.740 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.740 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.116 11:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:10:20.116 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:32.333 Initializing NVMe Controllers 00:10:32.333 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:32.333 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:32.333 Initialization complete. Launching workers. 00:10:32.333 ======================================================== 00:10:32.333 Latency(us) 00:10:32.333 Device Information : IOPS MiB/s Average min max 00:10:32.333 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14719.75 57.50 4347.48 745.49 23135.63 00:10:32.333 ======================================================== 00:10:32.333 Total : 14719.75 57.50 4347.48 745.49 23135.63 00:10:32.333 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:32.333 rmmod nvme_tcp 00:10:32.333 rmmod nvme_fabrics 00:10:32.333 rmmod nvme_keyring 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 72946 ']' 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 72946 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 72946 ']' 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 72946 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72946 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:32.333 killing process 
with pid 72946 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72946' 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 72946 00:10:32.333 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 72946 00:10:32.333 nvmf threads initialize successfully 00:10:32.333 bdev subsystem init successfully 00:10:32.333 created an nvmf target service 00:10:32.333 create targets' poll groups done 00:10:32.333 all subsystems of target started 00:10:32.333 nvmf target is running 00:10:32.333 all subsystems of target stopped 00:10:32.333 destroy targets' poll groups done 00:10:32.333 destroyed the nvmf target service 00:10:32.333 bdev subsystem finish successfully 00:10:32.333 nvmf threads destroy successfully 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.334 00:10:32.334 real 0m12.805s 00:10:32.334 user 0m44.896s 00:10:32.334 sys 0m2.161s 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.334 ************************************ 00:10:32.334 END TEST nvmf_example 00:10:32.334 ************************************ 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:32.334 ************************************ 00:10:32.334 START TEST nvmf_filesystem 00:10:32.334 ************************************ 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:32.334 * Looking for test storage... 
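
The nvmf_example run that ended above (END TEST nvmf_example) reduces to a short, reproducible recipe. The following is a hand-rolled sketch, not the harness itself: it assumes an SPDK checkout at /home/vagrant/spdk_repo/spdk, drives the RPCs with scripts/rpc.py instead of the harness's rpc_cmd wrapper, treats the veth/namespace plumbing from nvmf_veth_init as already in place, and uses a sleep as a stand-in for the harness's waitforlisten on /var/tmp/spdk.sock. Everything else is lifted directly from the trace:

    # Tag every firewall rule so cleanup can strip them in one pass,
    # mirroring what nvmf/common.sh@790/@791 expand to in the trace.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    SPDK=/home/vagrant/spdk_repo/spdk
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # The target app runs inside the namespace so its listener binds the veth side.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
    sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock

    # Same RPC sequence as nvmf_example.sh@45..@57; bdev_malloc_create returns "Malloc0".
    rpc="$SPDK/scripts/rpc.py"
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Exercise the listener the same way the test does.
    "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

    iptr   # drop only the SPDK_NVMF-tagged rules, leaving the rest of the firewall intact

The comment tag is what makes the blanket iptr cleanup safe: only rules carrying SPDK_NVMF are filtered out of the saved ruleset before restore, which is exactly the iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline traced during nvmf_tcp_fini above.
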
00:10:32.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:32.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.334 --rc genhtml_branch_coverage=1 00:10:32.334 --rc genhtml_function_coverage=1 00:10:32.334 --rc genhtml_legend=1 00:10:32.334 --rc geninfo_all_blocks=1 00:10:32.334 --rc geninfo_unexecuted_blocks=1 00:10:32.334 00:10:32.334 ' 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:32.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.334 --rc genhtml_branch_coverage=1 00:10:32.334 --rc genhtml_function_coverage=1 00:10:32.334 --rc genhtml_legend=1 00:10:32.334 --rc geninfo_all_blocks=1 00:10:32.334 --rc geninfo_unexecuted_blocks=1 00:10:32.334 00:10:32.334 ' 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:32.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.334 --rc genhtml_branch_coverage=1 00:10:32.334 --rc genhtml_function_coverage=1 00:10:32.334 --rc genhtml_legend=1 00:10:32.334 --rc geninfo_all_blocks=1 00:10:32.334 --rc geninfo_unexecuted_blocks=1 00:10:32.334 00:10:32.334 ' 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:32.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.334 --rc genhtml_branch_coverage=1 00:10:32.334 --rc genhtml_function_coverage=1 00:10:32.334 --rc genhtml_legend=1 00:10:32.334 --rc geninfo_all_blocks=1 00:10:32.334 --rc geninfo_unexecuted_blocks=1 00:10:32.334 00:10:32.334 ' 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:32.334 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:32.335 11:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:10:32.335 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:32.335 #define SPDK_CONFIG_H 00:10:32.335 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:32.335 #define SPDK_CONFIG_APPS 1 00:10:32.335 #define SPDK_CONFIG_ARCH 
native 00:10:32.335 #undef SPDK_CONFIG_ASAN 00:10:32.335 #define SPDK_CONFIG_AVAHI 1 00:10:32.335 #undef SPDK_CONFIG_CET 00:10:32.335 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:32.335 #define SPDK_CONFIG_COVERAGE 1 00:10:32.335 #define SPDK_CONFIG_CROSS_PREFIX 00:10:32.335 #undef SPDK_CONFIG_CRYPTO 00:10:32.335 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:32.335 #undef SPDK_CONFIG_CUSTOMOCF 00:10:32.335 #undef SPDK_CONFIG_DAOS 00:10:32.335 #define SPDK_CONFIG_DAOS_DIR 00:10:32.335 #define SPDK_CONFIG_DEBUG 1 00:10:32.335 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:32.335 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:32.335 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:32.335 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:32.335 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:32.335 #undef SPDK_CONFIG_DPDK_UADK 00:10:32.335 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:32.335 #define SPDK_CONFIG_EXAMPLES 1 00:10:32.335 #undef SPDK_CONFIG_FC 00:10:32.335 #define SPDK_CONFIG_FC_PATH 00:10:32.335 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:32.335 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:32.335 #define SPDK_CONFIG_FSDEV 1 00:10:32.335 #undef SPDK_CONFIG_FUSE 00:10:32.335 #undef SPDK_CONFIG_FUZZER 00:10:32.336 #define SPDK_CONFIG_FUZZER_LIB 00:10:32.336 #define SPDK_CONFIG_GOLANG 1 00:10:32.336 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:32.336 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:32.336 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:32.336 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:32.336 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:32.336 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:32.336 #undef SPDK_CONFIG_HAVE_LZ4 00:10:32.336 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:32.336 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:32.336 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:32.336 #define SPDK_CONFIG_IDXD 1 00:10:32.336 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:32.336 #undef SPDK_CONFIG_IPSEC_MB 00:10:32.336 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:32.336 #define SPDK_CONFIG_ISAL 1 00:10:32.336 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:32.336 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:32.336 #define SPDK_CONFIG_LIBDIR 00:10:32.336 #undef SPDK_CONFIG_LTO 00:10:32.336 #define SPDK_CONFIG_MAX_LCORES 128 00:10:32.336 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:32.336 #define SPDK_CONFIG_NVME_CUSE 1 00:10:32.336 #undef SPDK_CONFIG_OCF 00:10:32.336 #define SPDK_CONFIG_OCF_PATH 00:10:32.336 #define SPDK_CONFIG_OPENSSL_PATH 00:10:32.336 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:32.336 #define SPDK_CONFIG_PGO_DIR 00:10:32.336 #undef SPDK_CONFIG_PGO_USE 00:10:32.336 #define SPDK_CONFIG_PREFIX /usr/local 00:10:32.336 #undef SPDK_CONFIG_RAID5F 00:10:32.336 #undef SPDK_CONFIG_RBD 00:10:32.336 #define SPDK_CONFIG_RDMA 1 00:10:32.336 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:32.336 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:32.336 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:32.336 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:32.336 #define SPDK_CONFIG_SHARED 1 00:10:32.336 #undef SPDK_CONFIG_SMA 00:10:32.336 #define SPDK_CONFIG_TESTS 1 00:10:32.336 #undef SPDK_CONFIG_TSAN 00:10:32.336 #define SPDK_CONFIG_UBLK 1 00:10:32.336 #define SPDK_CONFIG_UBSAN 1 00:10:32.336 #undef SPDK_CONFIG_UNIT_TESTS 00:10:32.336 #undef SPDK_CONFIG_URING 00:10:32.336 #define SPDK_CONFIG_URING_PATH 00:10:32.336 #undef SPDK_CONFIG_URING_ZNS 00:10:32.336 #define SPDK_CONFIG_USDT 1 00:10:32.336 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:32.336 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:32.336 
#undef SPDK_CONFIG_VFIO_USER 00:10:32.336 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:32.336 #define SPDK_CONFIG_VHOST 1 00:10:32.336 #define SPDK_CONFIG_VIRTIO 1 00:10:32.336 #undef SPDK_CONFIG_VTUNE 00:10:32.336 #define SPDK_CONFIG_VTUNE_DIR 00:10:32.336 #define SPDK_CONFIG_WERROR 1 00:10:32.336 #define SPDK_CONFIG_WPDK_DIR 00:10:32.336 #undef SPDK_CONFIG_XNVME 00:10:32.336 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:32.336 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:32.337 
11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:32.337 11:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:32.337 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:32.338 11:32:31 
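The leak-suppression steps traced above, collected into a standalone sketch: LeakSanitizer is pointed at a freshly rebuilt suppression file so a known libfuse3 leak does not fail the run (the path and the leak pattern are exactly those in the trace):

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"                            # start from a clean file
    echo "leak:libfuse3.so" >> "$asan_suppression_file"        # ignore the known FUSE leak
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"  # LSan reads this at exit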
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 73220 ]] 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 73220 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.HRJbPn 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.HRJbPn/tests/target /tmp/spdk.HRJbPn 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13977382912 00:10:32.338 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5591486464 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256398336 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13977382912 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5591486464 00:10:32.339 
11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266290176 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt/output 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=94866776064 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4836003840 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:32.339 * Looking for test storage... 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13977382912 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:32.339 11:32:31 
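The storage probe traced above (set_test_storage) walks "df -T" output into associative arrays keyed by mount point, then picks the first candidate directory whose filesystem has room for the requested 2 GiB. A condensed sketch: the field order follows the "read -r source fs size use avail _ mount" seen in the trace, while the *1024 conversion from df's 1K blocks to bytes is an assumption consistent with the byte counts shown:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))    # df -T reports 1K blocks
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)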
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:32.339 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.340 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:32.340 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:32.340 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:32.340 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.340 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:32.340 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.340 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:32.340 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:32.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.340 --rc genhtml_branch_coverage=1 00:10:32.340 --rc genhtml_function_coverage=1 00:10:32.340 --rc genhtml_legend=1 00:10:32.340 --rc geninfo_all_blocks=1 00:10:32.340 --rc geninfo_unexecuted_blocks=1 00:10:32.340 00:10:32.340 ' 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:32.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.340 --rc genhtml_branch_coverage=1 00:10:32.340 --rc genhtml_function_coverage=1 00:10:32.340 --rc genhtml_legend=1 00:10:32.340 --rc geninfo_all_blocks=1 00:10:32.340 --rc geninfo_unexecuted_blocks=1 00:10:32.340 00:10:32.340 ' 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:32.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.340 --rc genhtml_branch_coverage=1 00:10:32.340 --rc genhtml_function_coverage=1 00:10:32.340 --rc genhtml_legend=1 00:10:32.340 --rc geninfo_all_blocks=1 00:10:32.340 --rc geninfo_unexecuted_blocks=1 00:10:32.340 00:10:32.340 ' 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:32.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.340 --rc genhtml_branch_coverage=1 00:10:32.340 --rc genhtml_function_coverage=1 00:10:32.340 --rc genhtml_legend=1 00:10:32.340 --rc geninfo_all_blocks=1 00:10:32.340 --rc geninfo_unexecuted_blocks=1 00:10:32.340 00:10:32.340 ' 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.340 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.340 11:32:32 
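The "[: : integer expression expected" message captured above is a textbook empty-variable bug: the traced test '[' '' -eq 1 ']' hands an empty string to an integer comparison, so '[' prints the error and returns false, and the script happens to fall through to the intended branch anyway. The usual guard is a default expansion; a sketch (SOME_FLAG is a hypothetical stand-in for whatever variable nvmf/common.sh line 33 actually reads):

    if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # ":-0" guarantees a numeric operand
        echo "flag enabled"
    fi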
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.340 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:32.341 Cannot find device "nvmf_init_br" 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:32.341 Cannot find device "nvmf_init_br2" 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:32.341 Cannot find device "nvmf_tgt_br" 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:32.341 Cannot find device "nvmf_tgt_br2" 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:32.341 Cannot find device "nvmf_init_br" 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:32.341 Cannot find device "nvmf_init_br2" 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:32.341 Cannot find device "nvmf_tgt_br" 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:32.341 Cannot find device "nvmf_tgt_br2" 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:32.341 Cannot find device "nvmf_br" 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:32.341 Cannot find device "nvmf_init_if" 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:32.341 Cannot find device "nvmf_init_if2" 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:32.341 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:32.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:32.341 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:32.341 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:10:32.341 00:10:32.341 --- 10.0.0.3 ping statistics --- 00:10:32.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.341 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:32.341 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:32.341 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 00:10:32.341 00:10:32.341 --- 10.0.0.4 ping statistics --- 00:10:32.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.341 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:32.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:10:32.341 00:10:32.341 --- 10.0.0.1 ping statistics --- 00:10:32.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.341 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:32.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:32.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:10:32.341 00:10:32.341 --- 10.0.0.2 ping statistics --- 00:10:32.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.341 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.341 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.341 ************************************ 00:10:32.341 START TEST nvmf_filesystem_no_in_capsule 00:10:32.341 ************************************ 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=73409 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 73409 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 73409 ']' 00:10:32.342 11:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.342 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.342 [2024-12-16 11:32:32.544423] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:10:32.342 [2024-12-16 11:32:32.544546] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.342 [2024-12-16 11:32:32.692361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.342 [2024-12-16 11:32:32.767073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.342 [2024-12-16 11:32:32.767206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.342 [2024-12-16 11:32:32.767231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.342 [2024-12-16 11:32:32.767247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.342 [2024-12-16 11:32:32.767261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
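waitforlisten, traced above with pid 73409 and rpc_addr=/var/tmp/spdk.sock, blocks until the freshly started nvmf_tgt answers on its RPC socket. A rough sketch of the shape of that loop; the kill -0 liveness probe, the socket test, and the sleep interval are assumptions (the real helper is more thorough) — only the retry budget of 100 comes from the trace:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            [[ -S $rpc_addr ]] && return 0           # RPC socket is up
            sleep 0.5
        done
        return 1                                     # timed out waiting
    }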
00:10:32.342 [2024-12-16 11:32:32.768760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.342 [2024-12-16 11:32:32.768898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.342 [2024-12-16 11:32:32.769035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.342 [2024-12-16 11:32:32.769043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.909 [2024-12-16 11:32:33.569640] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.909 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.169 Malloc1 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.169 11:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.169 [2024-12-16 11:32:33.751953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:33.169 { 00:10:33.169 "aliases": [ 00:10:33.169 "88db6675-3e45-44e9-8adf-e6405f5286dc" 00:10:33.169 ], 00:10:33.169 "assigned_rate_limits": { 00:10:33.169 "r_mbytes_per_sec": 0, 00:10:33.169 "rw_ios_per_sec": 0, 00:10:33.169 "rw_mbytes_per_sec": 0, 00:10:33.169 "w_mbytes_per_sec": 0 00:10:33.169 }, 00:10:33.169 "block_size": 512, 00:10:33.169 "claim_type": "exclusive_write", 00:10:33.169 "claimed": true, 00:10:33.169 "driver_specific": {}, 00:10:33.169 "memory_domains": [ 00:10:33.169 { 00:10:33.169 "dma_device_id": "system", 00:10:33.169 "dma_device_type": 1 00:10:33.169 }, 00:10:33.169 { 00:10:33.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.169 
"dma_device_type": 2 00:10:33.169 } 00:10:33.169 ], 00:10:33.169 "name": "Malloc1", 00:10:33.169 "num_blocks": 1048576, 00:10:33.169 "product_name": "Malloc disk", 00:10:33.169 "supported_io_types": { 00:10:33.169 "abort": true, 00:10:33.169 "compare": false, 00:10:33.169 "compare_and_write": false, 00:10:33.169 "copy": true, 00:10:33.169 "flush": true, 00:10:33.169 "get_zone_info": false, 00:10:33.169 "nvme_admin": false, 00:10:33.169 "nvme_io": false, 00:10:33.169 "nvme_io_md": false, 00:10:33.169 "nvme_iov_md": false, 00:10:33.169 "read": true, 00:10:33.169 "reset": true, 00:10:33.169 "seek_data": false, 00:10:33.169 "seek_hole": false, 00:10:33.169 "unmap": true, 00:10:33.169 "write": true, 00:10:33.169 "write_zeroes": true, 00:10:33.169 "zcopy": true, 00:10:33.169 "zone_append": false, 00:10:33.169 "zone_management": false 00:10:33.169 }, 00:10:33.169 "uuid": "88db6675-3e45-44e9-8adf-e6405f5286dc", 00:10:33.169 "zoned": false 00:10:33.169 } 00:10:33.169 ]' 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:33.169 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:33.431 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:33.431 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:33.431 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.431 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:33.431 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:35.334 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:35.334 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:35.334 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:35.334 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:35.334 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.334 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:35.334 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:35.334 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:35.592 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:35.592 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:35.592 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:35.592 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:35.592 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:35.592 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:35.593 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:35.593 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:35.593 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:35.593 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:35.593 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:36.528 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.529 ************************************ 00:10:36.529 START TEST filesystem_ext4 00:10:36.529 ************************************ 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
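
Up to this point the target has been provisioned over JSON-RPC and the host has attached and partitioned the namespace. Stripped of the xtrace prefixes, that sequence reduces to roughly the sketch below; rpc_cmd in the traces is the harness wrapper around SPDK's scripts/rpc.py, every size, NQN, serial and address is copied from the log, and /dev/nvme0n1 stands for whichever block device lsblk matched by serial above:

    rpc() { "$SPDK_BIN/scripts/rpc.py" -s "$SOCK" "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192 -c 0     # -c 0: no in-capsule data in this half
    rpc bdev_malloc_create 512 512 -b Malloc1            # 512 MiB, 512 B blocks -> num_blocks 1048576
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Host side: attach the namespace, then carve one GPT partition for the fs tests.
    # (The run also passes --hostnqn/--hostid to nvme connect; omitted here for brevity.)
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

The filesystem_ext4 test that starts here then formats and exercises that partition.
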
00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:36.529 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:36.529 mke2fs 1.47.0 (5-Feb-2023) 00:10:36.787 Discarding device blocks: 0/522240 done 00:10:36.787 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:36.787 Filesystem UUID: e4fa0c0d-ea86-4efe-8c0e-8eda6c13c1ff 00:10:36.787 Superblock backups stored on blocks: 00:10:36.787 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:36.787 00:10:36.787 Allocating group tables: 0/64 done 00:10:36.787 Writing inode tables: 0/64 done 00:10:36.787 Creating journal (8192 blocks): done 00:10:36.787 Writing superblocks and filesystem accounting information: 0/64 done 00:10:36.787 00:10:36.787 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:36.787 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.054 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.054 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:42.054 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.054 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:42.054 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:42.054 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.313 
11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 73409 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.313 ************************************ 00:10:42.313 END TEST filesystem_ext4 00:10:42.313 ************************************ 00:10:42.313 00:10:42.313 real 0m5.574s 00:10:42.313 user 0m0.025s 00:10:42.313 sys 0m0.069s 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.313 ************************************ 00:10:42.313 START TEST filesystem_btrfs 00:10:42.313 ************************************ 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:42.313 11:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:42.313 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:42.313 btrfs-progs v6.8.1 00:10:42.313 See https://btrfs.readthedocs.io for more information. 00:10:42.313 00:10:42.313 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:42.313 NOTE: several default settings have changed in version 5.15, please make sure 00:10:42.313 this does not affect your deployments: 00:10:42.313 - DUP for metadata (-m dup) 00:10:42.313 - enabled no-holes (-O no-holes) 00:10:42.313 - enabled free-space-tree (-R free-space-tree) 00:10:42.313 00:10:42.313 Label: (null) 00:10:42.314 UUID: e93a8270-09ce-42d5-a494-b22d67750aa6 00:10:42.314 Node size: 16384 00:10:42.314 Sector size: 4096 (CPU page size: 4096) 00:10:42.314 Filesystem size: 510.00MiB 00:10:42.314 Block group profiles: 00:10:42.314 Data: single 8.00MiB 00:10:42.314 Metadata: DUP 32.00MiB 00:10:42.314 System: DUP 8.00MiB 00:10:42.314 SSD detected: yes 00:10:42.314 Zoned device: no 00:10:42.314 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:42.314 Checksum: crc32c 00:10:42.314 Number of devices: 1 00:10:42.314 Devices: 00:10:42.314 ID SIZE PATH 00:10:42.314 1 510.00MiB /dev/nvme0n1p1 00:10:42.314 00:10:42.314 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:42.314 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.314 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.314 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 73409 00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.573 
11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:42.573 ************************************
00:10:42.573 END TEST filesystem_btrfs
00:10:42.573 ************************************
00:10:42.573
00:10:42.573 real 0m0.231s
00:10:42.573 user 0m0.018s
00:10:42.573 sys 0m0.066s
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:42.573 ************************************
00:10:42.573 START TEST filesystem_xfs
00:10:42.573 ************************************
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:10:42.573 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:10:42.573 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:10:42.573 = sectsz=512 attr=2, projid32bit=1
00:10:42.573 = crc=1 finobt=1, sparse=1, rmapbt=0
00:10:42.573 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:10:42.573 data = bsize=4096 blocks=130560, imaxpct=25
00:10:42.573 = sunit=0 swidth=0 blks
00:10:42.573 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:10:42.573 log =internal log bsize=4096 blocks=16384, version=2
00:10:42.573 = sectsz=512 sunit=0 blks, lazy-count=1
00:10:42.573 realtime =none extsz=4096 blocks=0, rtextents=0
00:10:43.508 Discarding blocks...Done.
00:10:43.508 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0
00:10:43.508 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 73409
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:46.038 ************************************
00:10:46.038 END TEST filesystem_xfs
00:10:46.038 ************************************
00:10:46.038
00:10:46.038 real 0m3.141s
00:10:46.038 user 0m0.023s
00:10:46.038 sys 0m0.061s
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:46.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:46.038 11:32:46
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.038 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.039 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:46.039 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 73409 00:10:46.039 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 73409 ']' 00:10:46.039 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 73409 00:10:46.039 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:46.039 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.039 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73409 00:10:46.039 killing process with pid 73409 00:10:46.039 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.039 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.039 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73409' 00:10:46.039 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 73409 00:10:46.039 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 73409 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:46.297 00:10:46.297 real 0m14.416s 00:10:46.297 user 0m55.131s 00:10:46.297 sys 0m2.156s 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.297 ************************************ 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.297 END TEST nvmf_filesystem_no_in_capsule 00:10:46.297 ************************************ 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.297 ************************************ 00:10:46.297 START TEST nvmf_filesystem_in_capsule 00:10:46.297 ************************************ 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=73775 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 73775 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 73775 ']' 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
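
The half that begins here repeats the whole flow with in_capsule=4096. The only functional difference from the run above is the transport setting: nvmf_create_transport is now invoked with -c 4096 instead of -c 0, so writes of up to 4 KiB travel inside the command capsule rather than being fetched with a ready-to-transfer round trip. The per-filesystem check itself is unchanged; as a sketch of that check (verify_fs is a hypothetical name, and the force-flag dispatch mirrors the make_filesystem logic visible in the traces, ext4 taking -F while btrfs and xfs take -f):

    verify_fs() {
        local fstype=$1 dev=/dev/nvme0n1p1
        case "$fstype" in
            ext4) mkfs.ext4 -F "$dev" ;;
            *)    "mkfs.$fstype" -f "$dev" ;;
        esac
        mount "$dev" /mnt/device
        touch /mnt/device/aaa && sync    # push a write through the NVMe/TCP path
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 "$nvmfpid"               # target pid from launch; it must survive the I/O
    }

    for fs in ext4 btrfs xfs; do verify_fs "$fs"; done
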
00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.297 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.297 [2024-12-16 11:32:47.006218] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:10:46.297 [2024-12-16 11:32:47.006343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.555 [2024-12-16 11:32:47.157871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.555 [2024-12-16 11:32:47.220018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.555 [2024-12-16 11:32:47.220123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.555 [2024-12-16 11:32:47.220135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.555 [2024-12-16 11:32:47.220155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.555 [2024-12-16 11:32:47.220163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.555 [2024-12-16 11:32:47.221347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.555 [2024-12-16 11:32:47.221478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.555 [2024-12-16 11:32:47.221805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.555 [2024-12-16 11:32:47.221810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.814 [2024-12-16 11:32:47.390753] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.814 11:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.814 Malloc1 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.814 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.072 [2024-12-16 11:32:47.577099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:47.072 11:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.072 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:47.072 { 00:10:47.072 "aliases": [ 00:10:47.072 "bf56cb58-de3d-4a96-8185-1fc55b1df9b4" 00:10:47.072 ], 00:10:47.072 "assigned_rate_limits": { 00:10:47.072 "r_mbytes_per_sec": 0, 00:10:47.072 "rw_ios_per_sec": 0, 00:10:47.072 "rw_mbytes_per_sec": 0, 00:10:47.072 "w_mbytes_per_sec": 0 00:10:47.072 }, 00:10:47.072 "block_size": 512, 00:10:47.072 "claim_type": "exclusive_write", 00:10:47.072 "claimed": true, 00:10:47.072 "driver_specific": {}, 00:10:47.072 "memory_domains": [ 00:10:47.072 { 00:10:47.072 "dma_device_id": "system", 00:10:47.072 "dma_device_type": 1 00:10:47.072 }, 00:10:47.072 { 00:10:47.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.072 "dma_device_type": 2 00:10:47.072 } 00:10:47.072 ], 00:10:47.072 "name": "Malloc1", 00:10:47.072 "num_blocks": 1048576, 00:10:47.072 "product_name": "Malloc disk", 00:10:47.072 "supported_io_types": { 00:10:47.072 "abort": true, 00:10:47.072 "compare": false, 00:10:47.072 "compare_and_write": false, 00:10:47.072 "copy": true, 00:10:47.072 "flush": true, 00:10:47.072 "get_zone_info": false, 00:10:47.073 "nvme_admin": false, 00:10:47.073 "nvme_io": false, 00:10:47.073 "nvme_io_md": false, 00:10:47.073 "nvme_iov_md": false, 00:10:47.073 "read": true, 00:10:47.073 "reset": true, 00:10:47.073 "seek_data": false, 00:10:47.073 "seek_hole": false, 00:10:47.073 "unmap": true, 00:10:47.073 "write": true, 00:10:47.073 "write_zeroes": true, 00:10:47.073 "zcopy": true, 00:10:47.073 "zone_append": false, 00:10:47.073 "zone_management": false 00:10:47.073 }, 00:10:47.073 "uuid": "bf56cb58-de3d-4a96-8185-1fc55b1df9b4", 00:10:47.073 "zoned": false 00:10:47.073 } 00:10:47.073 ]' 00:10:47.073 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:47.073 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:47.073 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:47.073 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:47.073 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:47.073 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:47.073 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:47.073 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:47.332 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:47.332 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:47.332 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.332 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:47.332 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:49.232 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:49.232 11:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:49.491 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.428 ************************************ 00:10:50.428 START TEST filesystem_in_capsule_ext4 00:10:50.428 ************************************ 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:50.428 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:50.428 mke2fs 1.47.0 (5-Feb-2023) 00:10:50.428 Discarding device blocks: 0/522240 done 00:10:50.428 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:50.428 Filesystem UUID: 4cc09cd4-2101-4023-9b46-aa02dca80448 00:10:50.428 Superblock backups stored on blocks: 00:10:50.428 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:50.428 00:10:50.428 Allocating group tables: 0/64 done 00:10:50.428 Writing inode tables: 
0/64 done 00:10:50.428 Creating journal (8192 blocks): done 00:10:50.687 Writing superblocks and filesystem accounting information: 0/64 done 00:10:50.687 00:10:50.687 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:50.687 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 73775 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:55.954 ************************************ 00:10:55.954 END TEST filesystem_in_capsule_ext4 00:10:55.954 ************************************ 00:10:55.954 00:10:55.954 real 0m5.594s 00:10:55.954 user 0m0.024s 00:10:55.954 sys 0m0.067s 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.954 
************************************ 00:10:55.954 START TEST filesystem_in_capsule_btrfs 00:10:55.954 ************************************ 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:55.954 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:56.213 btrfs-progs v6.8.1 00:10:56.213 See https://btrfs.readthedocs.io for more information. 00:10:56.213 00:10:56.213 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:56.213 NOTE: several default settings have changed in version 5.15, please make sure 00:10:56.213 this does not affect your deployments: 00:10:56.213 - DUP for metadata (-m dup) 00:10:56.213 - enabled no-holes (-O no-holes) 00:10:56.213 - enabled free-space-tree (-R free-space-tree) 00:10:56.213 00:10:56.213 Label: (null) 00:10:56.213 UUID: d5de30ac-dd82-4201-9a9a-0c44d3fdd28d 00:10:56.213 Node size: 16384 00:10:56.213 Sector size: 4096 (CPU page size: 4096) 00:10:56.213 Filesystem size: 510.00MiB 00:10:56.213 Block group profiles: 00:10:56.213 Data: single 8.00MiB 00:10:56.213 Metadata: DUP 32.00MiB 00:10:56.213 System: DUP 8.00MiB 00:10:56.213 SSD detected: yes 00:10:56.213 Zoned device: no 00:10:56.213 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:56.213 Checksum: crc32c 00:10:56.213 Number of devices: 1 00:10:56.213 Devices: 00:10:56.213 ID SIZE PATH 00:10:56.213 1 510.00MiB /dev/nvme0n1p1 00:10:56.213 00:10:56.213 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:56.213 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:56.213 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:56.213 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:56.213 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:56.213 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:56.213 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:56.213 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:56.213 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 73775 00:10:56.213 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:56.214 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:56.214 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:56.214 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:56.214 ************************************ 00:10:56.214 END TEST filesystem_in_capsule_btrfs 00:10:56.214 ************************************ 00:10:56.214 00:10:56.214 real 0m0.237s 00:10:56.214 user 0m0.021s 00:10:56.214 sys 0m0.062s 00:10:56.214 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:10:56.214 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:56.214 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:56.214 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:56.214 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.214 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.472 ************************************ 00:10:56.472 START TEST filesystem_in_capsule_xfs 00:10:56.472 ************************************ 00:10:56.472 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:56.472 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:56.472 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:56.472 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:56.472 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:56.472 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:56.472 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:56.472 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:56.472 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:56.472 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:56.472 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:56.472 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:56.472 = sectsz=512 attr=2, projid32bit=1 00:10:56.472 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:56.472 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:56.472 data = bsize=4096 blocks=130560, imaxpct=25 00:10:56.472 = sunit=0 swidth=0 blks 00:10:56.472 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:56.472 log =internal log bsize=4096 blocks=16384, version=2 00:10:56.472 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:56.472 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:57.038 Discarding blocks...Done. 
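After each mkfs, target/filesystem.sh runs the same smoke test, which the trace shows next for xfs. Condensed sketch of that check (here $nvmfpid stands for the target PID, 73775 in this run):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                      # nvmf target must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1   # namespace still visible...
    lsblk -l -o NAME | grep -q -w nvme0n1p1 # ...and so is the partition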
00:10:57.038 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:57.038 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 73775 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:58.986 ************************************ 00:10:58.986 END TEST filesystem_in_capsule_xfs 00:10:58.986 ************************************ 00:10:58.986 00:10:58.986 real 0m2.611s 00:10:58.986 user 0m0.026s 00:10:58.986 sys 0m0.053s 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.986 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.244 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.244 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:59.244 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 73775 00:10:59.244 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 73775 ']' 00:10:59.244 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 73775 00:10:59.244 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:59.244 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.244 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73775 00:10:59.244 killing process with pid 73775 00:10:59.244 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.245 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.245 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73775' 00:10:59.245 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 73775 00:10:59.245 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 73775 00:10:59.502 ************************************ 00:10:59.502 END TEST nvmf_filesystem_in_capsule 00:10:59.503 ************************************ 00:10:59.503 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 
00:10:59.503 00:10:59.503 real 0m13.242s 00:10:59.503 user 0m50.462s 00:10:59.503 sys 0m2.015s 00:10:59.503 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.503 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.503 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:59.503 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:59.503 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.761 rmmod nvme_tcp 00:10:59.761 rmmod nvme_fabrics 00:10:59.761 rmmod nvme_keyring 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:59.761 11:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:59.761 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:11:00.019 ************************************ 00:11:00.019 END TEST nvmf_filesystem 00:11:00.019 ************************************ 00:11:00.019 00:11:00.019 real 0m29.032s 00:11:00.019 user 1m46.058s 00:11:00.019 sys 0m4.730s 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.019 ************************************ 00:11:00.019 START TEST nvmf_target_discovery 00:11:00.019 ************************************ 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:00.019 * Looking for test storage... 
00:11:00.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:00.019 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:00.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.278 --rc genhtml_branch_coverage=1 00:11:00.278 --rc genhtml_function_coverage=1 00:11:00.278 --rc genhtml_legend=1 00:11:00.278 --rc geninfo_all_blocks=1 00:11:00.278 --rc geninfo_unexecuted_blocks=1 00:11:00.278 00:11:00.278 ' 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:00.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.278 --rc genhtml_branch_coverage=1 00:11:00.278 --rc genhtml_function_coverage=1 00:11:00.278 --rc genhtml_legend=1 00:11:00.278 --rc geninfo_all_blocks=1 00:11:00.278 --rc geninfo_unexecuted_blocks=1 00:11:00.278 00:11:00.278 ' 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:00.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.278 --rc genhtml_branch_coverage=1 00:11:00.278 --rc genhtml_function_coverage=1 00:11:00.278 --rc genhtml_legend=1 00:11:00.278 --rc geninfo_all_blocks=1 00:11:00.278 --rc geninfo_unexecuted_blocks=1 00:11:00.278 00:11:00.278 ' 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:00.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.278 --rc genhtml_branch_coverage=1 00:11:00.278 --rc genhtml_function_coverage=1 00:11:00.278 --rc genhtml_legend=1 00:11:00.278 --rc geninfo_all_blocks=1 00:11:00.278 --rc geninfo_unexecuted_blocks=1 00:11:00.278 00:11:00.278 ' 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:00.278 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.279 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
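The NVMF_* assignments above and below name the pieces of the virtual test network; nvmf_veth_init first tears down any leftovers (the "Cannot find device" lines that follow) and then rebuilds it. Condensed to one initiator/target pair, a sketch assembled from the trace (the second pair, nvmf_init_if2/nvmf_tgt_if2 at 10.0.0.2/10.0.0.4, is created the same way, and the iptables ACCEPT rules for port 4420 come afterwards):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The four pings in the trace below then verify both directions across the bridge, host to namespace (10.0.0.3/.4) and namespace to host (10.0.0.1/.2).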
00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:00.279 Cannot find device "nvmf_init_br" 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:00.279 Cannot find device "nvmf_init_br2" 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:00.279 Cannot find device "nvmf_tgt_br" 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.279 Cannot find device "nvmf_tgt_br2" 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:00.279 Cannot find device "nvmf_init_br" 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:00.279 Cannot find device "nvmf_init_br2" 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:00.279 Cannot find device "nvmf_tgt_br" 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:00.279 Cannot find device "nvmf_tgt_br2" 00:11:00.279 11:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:11:00.279 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:00.279 Cannot find device "nvmf_br" 00:11:00.279 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:11:00.279 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:00.279 Cannot find device "nvmf_init_if" 00:11:00.279 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:11:00.279 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:00.537 Cannot find device "nvmf_init_if2" 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.537 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.537 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:00.537 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:00.538 11:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:00.538 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:00.538 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:11:00.538 00:11:00.538 --- 10.0.0.3 ping statistics --- 00:11:00.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.538 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:00.538 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:00.538 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:11:00.538 00:11:00.538 --- 10.0.0.4 ping statistics --- 00:11:00.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.538 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:00.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:00.538 00:11:00.538 --- 10.0.0.1 ping statistics --- 00:11:00.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.538 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:00.538 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:00.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:11:00.796 00:11:00.796 --- 10.0.0.2 ping statistics --- 00:11:00.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.796 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=74355 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
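nvmfappstart launches the target inside the namespace and waitforlisten blocks until its RPC socket answers. A sketch of that handshake, under assumed simplifications (SPDK's real waitforlisten also enforces a max_retries budget, seen as the "local max_retries=100" in the trace):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the app responds
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || break   # stop waiting if the target died at startup
        sleep 0.1
    done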
00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 74355 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 74355 ']' 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.796 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.796 [2024-12-16 11:33:01.390330] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:11:00.797 [2024-12-16 11:33:01.390442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.055 [2024-12-16 11:33:01.548627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.055 [2024-12-16 11:33:01.612214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.055 [2024-12-16 11:33:01.612307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.055 [2024-12-16 11:33:01.612322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.055 [2024-12-16 11:33:01.612332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.055 [2024-12-16 11:33:01.612342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
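With the app up, discovery.sh provisions its fixtures; the trace that follows shows the transport creation and the first iterations of its "seq 1 4" loop. Condensed sketch (using rpc.py directly, whereas the log routes every call through the rpc_cmd wrapper; the sizes come from NULL_BDEV_SIZE=102400 and NULL_BLOCK_SIZE=512 set earlier):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        rpc.py bdev_null_create "Null$i" 102400 512
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
    done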
00:11:01.055 [2024-12-16 11:33:01.613687] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.055 [2024-12-16 11:33:01.613797] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.055 [2024-12-16 11:33:01.613800] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.055 [2024-12-16 11:33:01.613754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.055 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.055 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:01.055 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.055 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.055 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.055 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.055 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.055 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.055 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.314 [2024-12-16 11:33:01.810843] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.314 Null1 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.314 11:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.314 [2024-12-16 11:33:01.855162] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:01.314 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 Null2 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:01.315 Null3 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 Null4 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.315 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -a 10.0.0.3 -s 4420 00:11:01.574 00:11:01.574 Discovery Log Number of Records 6, Generation counter 6 00:11:01.574 =====Discovery Log Entry 0====== 00:11:01.574 trtype: tcp 00:11:01.574 adrfam: ipv4 00:11:01.574 subtype: current discovery subsystem 00:11:01.574 treq: not required 00:11:01.574 portid: 0 00:11:01.574 trsvcid: 4420 00:11:01.574 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:01.574 traddr: 10.0.0.3 00:11:01.574 eflags: explicit discovery connections, duplicate discovery information 00:11:01.574 sectype: none 00:11:01.574 =====Discovery Log Entry 1====== 00:11:01.574 trtype: tcp 00:11:01.574 adrfam: ipv4 00:11:01.574 subtype: nvme subsystem 00:11:01.574 treq: not required 00:11:01.574 portid: 0 00:11:01.574 trsvcid: 4420 00:11:01.574 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:01.574 traddr: 10.0.0.3 00:11:01.574 eflags: none 00:11:01.574 sectype: none 00:11:01.574 =====Discovery Log Entry 2====== 00:11:01.574 trtype: tcp 00:11:01.574 adrfam: ipv4 00:11:01.574 subtype: nvme subsystem 00:11:01.574 treq: not required 00:11:01.574 portid: 0 00:11:01.574 trsvcid: 4420 00:11:01.574 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:01.574 traddr: 10.0.0.3 00:11:01.574 eflags: none 00:11:01.574 sectype: none 00:11:01.574 =====Discovery Log Entry 3====== 00:11:01.574 trtype: tcp 00:11:01.574 adrfam: ipv4 00:11:01.574 subtype: nvme subsystem 00:11:01.574 treq: not required 00:11:01.574 portid: 0 00:11:01.574 trsvcid: 4420 00:11:01.574 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:01.574 traddr: 10.0.0.3 00:11:01.574 eflags: none 00:11:01.574 sectype: none 00:11:01.574 =====Discovery Log Entry 4====== 00:11:01.574 trtype: tcp 00:11:01.574 adrfam: ipv4 00:11:01.574 subtype: nvme subsystem 
00:11:01.574 treq: not required
00:11:01.574 portid: 0
00:11:01.574 trsvcid: 4420
00:11:01.574 subnqn: nqn.2016-06.io.spdk:cnode4
00:11:01.574 traddr: 10.0.0.3
00:11:01.574 eflags: none
00:11:01.574 sectype: none
00:11:01.574 =====Discovery Log Entry 5======
00:11:01.574 trtype: tcp
00:11:01.574 adrfam: ipv4
00:11:01.575 subtype: discovery subsystem referral
00:11:01.575 treq: not required
00:11:01.575 portid: 0
00:11:01.575 trsvcid: 4430
00:11:01.575 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:01.575 traddr: 10.0.0.3
00:11:01.575 eflags: none
00:11:01.575 sectype: none
00:11:01.575 Perform nvmf subsystem discovery via RPC
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:01.575 [
00:11:01.575 {
00:11:01.575 "allow_any_host": true,
00:11:01.575 "hosts": [],
00:11:01.575 "listen_addresses": [
00:11:01.575 {
00:11:01.575 "adrfam": "IPv4",
00:11:01.575 "traddr": "10.0.0.3",
00:11:01.575 "trsvcid": "4420",
00:11:01.575 "trtype": "TCP"
00:11:01.575 }
00:11:01.575 ],
00:11:01.575 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:11:01.575 "subtype": "Discovery"
00:11:01.575 },
00:11:01.575 {
00:11:01.575 "allow_any_host": true,
00:11:01.575 "hosts": [],
00:11:01.575 "listen_addresses": [
00:11:01.575 {
00:11:01.575 "adrfam": "IPv4",
00:11:01.575 "traddr": "10.0.0.3",
00:11:01.575 "trsvcid": "4420",
00:11:01.575 "trtype": "TCP"
00:11:01.575 }
00:11:01.575 ],
00:11:01.575 "max_cntlid": 65519,
00:11:01.575 "max_namespaces": 32,
00:11:01.575 "min_cntlid": 1,
00:11:01.575 "model_number": "SPDK bdev Controller",
00:11:01.575 "namespaces": [
00:11:01.575 {
00:11:01.575 "bdev_name": "Null1",
00:11:01.575 "name": "Null1",
00:11:01.575 "nguid": "CB336550DE8E4F26869ACF7DF8565C50",
00:11:01.575 "nsid": 1,
00:11:01.575 "uuid": "cb336550-de8e-4f26-869a-cf7df8565c50"
00:11:01.575 }
00:11:01.575 ],
00:11:01.575 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:01.575 "serial_number": "SPDK00000000000001",
00:11:01.575 "subtype": "NVMe"
00:11:01.575 },
00:11:01.575 {
00:11:01.575 "allow_any_host": true,
00:11:01.575 "hosts": [],
00:11:01.575 "listen_addresses": [
00:11:01.575 {
00:11:01.575 "adrfam": "IPv4",
00:11:01.575 "traddr": "10.0.0.3",
00:11:01.575 "trsvcid": "4420",
00:11:01.575 "trtype": "TCP"
00:11:01.575 }
00:11:01.575 ],
00:11:01.575 "max_cntlid": 65519,
00:11:01.575 "max_namespaces": 32,
00:11:01.575 "min_cntlid": 1,
00:11:01.575 "model_number": "SPDK bdev Controller",
00:11:01.575 "namespaces": [
00:11:01.575 {
00:11:01.575 "bdev_name": "Null2",
00:11:01.575 "name": "Null2",
00:11:01.575 "nguid": "BDF4B62EC1864936B4D7F5817609B9B2",
00:11:01.575 "nsid": 1,
00:11:01.575 "uuid": "bdf4b62e-c186-4936-b4d7-f5817609b9b2"
00:11:01.575 }
00:11:01.575 ],
00:11:01.575 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:11:01.575 "serial_number": "SPDK00000000000002",
00:11:01.575 "subtype": "NVMe"
00:11:01.575 },
00:11:01.575 {
00:11:01.575 "allow_any_host": true,
00:11:01.575 "hosts": [],
00:11:01.575 "listen_addresses": [
00:11:01.575 {
00:11:01.575 "adrfam": "IPv4",
00:11:01.575 "traddr": "10.0.0.3",
00:11:01.575 "trsvcid": "4420",
00:11:01.575 "trtype": "TCP"
00:11:01.575 }
00:11:01.575 ],
00:11:01.575 "max_cntlid": 65519,
00:11:01.575 "max_namespaces": 32,
00:11:01.575 "min_cntlid": 1,
00:11:01.575 "model_number": "SPDK bdev Controller",
00:11:01.575 "namespaces": [
00:11:01.575 {
00:11:01.575 "bdev_name": "Null3",
00:11:01.575 "name": "Null3",
00:11:01.575 "nguid": "23F35E12071349E1A86A62E26EB49B5A",
00:11:01.575 "nsid": 1,
00:11:01.575 "uuid": "23f35e12-0713-49e1-a86a-62e26eb49b5a"
00:11:01.575 }
00:11:01.575 ],
00:11:01.575 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:11:01.575 "serial_number": "SPDK00000000000003",
00:11:01.575 "subtype": "NVMe"
00:11:01.575 },
00:11:01.575 {
00:11:01.575 "allow_any_host": true,
00:11:01.575 "hosts": [],
00:11:01.575 "listen_addresses": [
00:11:01.575 {
00:11:01.575 "adrfam": "IPv4",
00:11:01.575 "traddr": "10.0.0.3",
00:11:01.575 "trsvcid": "4420",
00:11:01.575 "trtype": "TCP"
00:11:01.575 }
00:11:01.575 ],
00:11:01.575 "max_cntlid": 65519,
00:11:01.575 "max_namespaces": 32,
00:11:01.575 "min_cntlid": 1,
00:11:01.575 "model_number": "SPDK bdev Controller",
00:11:01.575 "namespaces": [
00:11:01.575 {
00:11:01.575 "bdev_name": "Null4",
00:11:01.575 "name": "Null4",
00:11:01.575 "nguid": "1095E00048FE4448AB6F3A0372A59166",
00:11:01.575 "nsid": 1,
00:11:01.575 "uuid": "1095e000-48fe-4448-ab6f-3a0372a59166"
00:11:01.575 }
00:11:01.575 ],
00:11:01.575 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:11:01.575 "serial_number": "SPDK00000000000004",
00:11:01.575 "subtype": "NVMe"
00:11:01.575 }
00:11:01.575 ]
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.575 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:01.576 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:01.576 rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 74355 ']'
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 74355
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 74355 ']'
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 74355
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74355
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 74355
11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74355'
11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 74355
00:11:01.834 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 74355
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:02.092 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:02.352 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0
00:11:02.352
00:11:02.352 real 0m2.217s
00:11:02.352 user 0m4.241s
00:11:02.352 sys 0m0.723s
00:11:02.352 ************************************
00:11:02.352 END TEST nvmf_target_discovery
************************************
00:11:02.352 11:33:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:11:02.352 11:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:02.352 11:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:02.352 11:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:02.352 ************************************
00:11:02.352 START TEST nvmf_referrals
00:11:02.352 ************************************
00:11:02.352 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:11:02.352 * Looking for test storage...
00:11:02.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:02.352 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:02.352 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version
00:11:02.352 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-:
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-:
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<'
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:02.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:02.352 --rc genhtml_branch_coverage=1
00:11:02.352 --rc genhtml_function_coverage=1
00:11:02.352 --rc genhtml_legend=1
00:11:02.352 --rc geninfo_all_blocks=1
00:11:02.352 --rc geninfo_unexecuted_blocks=1
00:11:02.352
00:11:02.352 '
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:02.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:02.352 --rc genhtml_branch_coverage=1
00:11:02.352 --rc genhtml_function_coverage=1
00:11:02.352 --rc genhtml_legend=1
00:11:02.352 --rc geninfo_all_blocks=1
00:11:02.352 --rc geninfo_unexecuted_blocks=1
00:11:02.352
00:11:02.352 '
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:11:02.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:02.352 --rc genhtml_branch_coverage=1
00:11:02.352 --rc genhtml_function_coverage=1
00:11:02.352 --rc genhtml_legend=1
00:11:02.352 --rc geninfo_all_blocks=1
00:11:02.352 --rc geninfo_unexecuted_blocks=1
00:11:02.352
00:11:02.352 '
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:11:02.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:02.352 --rc genhtml_branch_coverage=1
00:11:02.352 --rc genhtml_function_coverage=1
00:11:02.352 --rc genhtml_legend=1
00:11:02.352 --rc geninfo_all_blocks=1
00:11:02.352 --rc geninfo_unexecuted_blocks=1
00:11:02.352
00:11:02.352 '
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:02.352 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:02.353 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:11:02.624 Cannot find device "nvmf_init_br"
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:11:02.624 Cannot find device "nvmf_init_br2"
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:11:02.624 Cannot find device "nvmf_tgt_br"
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:11:02.624 Cannot find device "nvmf_tgt_br2"
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:11:02.624 Cannot find device "nvmf_init_br"
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:11:02.624 Cannot find device "nvmf_init_br2"
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:11:02.624 Cannot find device "nvmf_tgt_br"
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:11:02.624 Cannot find device "nvmf_tgt_br2"
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:11:02.624 Cannot find device "nvmf_br"
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:11:02.624 Cannot find device "nvmf_init_if"
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:11:02.624 Cannot find device "nvmf_init_if2"
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:02.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:02.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:02.624 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:11:02.625 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:11:02.625 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:11:02.625 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:11:02.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:02.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms
00:11:02.895
00:11:02.895 --- 10.0.0.3 ping statistics ---
00:11:02.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:02.895 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:11:02.895 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:11:02.895 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.101 ms
00:11:02.895
00:11:02.895 --- 10.0.0.4 ping statistics ---
00:11:02.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:02.895 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:02.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:02.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms
00:11:02.895
00:11:02.895 --- 10.0.0.1 ping statistics ---
00:11:02.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:02.895 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:11:02.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:02.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms
00:11:02.895
00:11:02.895 --- 10.0.0.2 ping statistics ---
00:11:02.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:02.895 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:02.895 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=74621
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 74621
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 74621 ']'
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:02.896 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:02.896 [2024-12-16 11:33:03.622769] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:11:02.896 [2024-12-16 11:33:03.622940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:03.154 [2024-12-16 11:33:03.791044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:03.154 [2024-12-16 11:33:03.852659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:03.154 [2024-12-16 11:33:03.852731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:03.154 [2024-12-16 11:33:03.852748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:03.154 [2024-12-16 11:33:03.852760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:03.154 [2024-12-16 11:33:03.852771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:03.154 [2024-12-16 11:33:03.854200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:03.154 [2024-12-16 11:33:03.854276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:03.154 [2024-12-16 11:33:03.854417] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:11:03.154 [2024-12-16 11:33:03.854427] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:04.087 [2024-12-16 11:33:04.647016] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:04.087 [2024-12-16 11:33:04.663243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 ***
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals --
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.087 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:04.345 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:04.345 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:04.345 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:04.345 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.345 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.346 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.346 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:04.346 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:04.346 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.346 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:04.346 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:04.346 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.346 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.602 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:04.859 11:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:04.859 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:05.117 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:05.375 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:05.375 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:05.375 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:05.375 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:05.375 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 
--hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:05.375 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:05.375 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:05.375 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:05.375 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.375 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -a 10.0.0.3 -s 8009 -o json 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:05.375 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.633 
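Note: the nvmf_referrals test body that just completed drives the discovery-referral RPCs end to end: add three referrals, confirm them from both the RPC side and an initiator's discovery log page, then remove them and confirm the list is empty. A condensed bash sketch of the same flow, assuming a running nvmf_tgt and the stock scripts/rpc.py (the addresses, port 4430, and jq filters mirror the trace; hostnqn/hostid flags are omitted for brevity):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery
  # Advertise three referrals, then read them back over RPC (referrals.sh@44-49).
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  # The same referrals must show up in an initiator's discovery log page.
  nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # Remove them again; the referral list must come back empty (referrals.sh@52-57).
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expect 0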
11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.633 rmmod nvme_tcp 00:11:05.633 rmmod nvme_fabrics 00:11:05.633 rmmod nvme_keyring 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 74621 ']' 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 74621 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 74621 ']' 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 74621 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:05.633 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.634 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74621 00:11:05.634 killing process with pid 74621 00:11:05.634 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.634 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.634 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74621' 00:11:05.634 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 74621 00:11:05.634 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 74621 00:11:05.891 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:05.891 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:05.891 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:05.892 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:05.892 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:05.892 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:05.892 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:05.892 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:05.892 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:05.892 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:05.892 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:05.892 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:05.892 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:05.892 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:11:06.149 00:11:06.149 real 0m3.942s 00:11:06.149 user 0m12.004s 00:11:06.149 sys 0m1.043s 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.149 ************************************ 00:11:06.149 END TEST nvmf_referrals 00:11:06.149 ************************************ 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.149 11:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:06.149 ************************************ 00:11:06.149 START TEST nvmf_connect_disconnect 00:11:06.149 ************************************ 00:11:06.150 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:06.408 * Looking for test storage... 
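Note: the teardown above removes the test's firewall rules in one shot via iptr (nvmf/common.sh@297) without tracking rules individually. The trick, visible at common.sh@790-791 in the trace, is that every rule is inserted with an iptables comment tag, so cleanup is a single save/filter/restore cycle. A minimal bash sketch of the pattern (run as root; the function bodies follow the trace, the two demo rules are the trace's own):

  ipts() {  # insert a rule, tagged so it can be found again later
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  iptr() {  # drop every tagged rule in one pass, leave everything else intact
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  iptr   # both tagged rules are gone; untagged rules survive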
00:11:06.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:06.408 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:06.408 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:06.408 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.408 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:06.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.409 --rc genhtml_branch_coverage=1 00:11:06.409 --rc genhtml_function_coverage=1 00:11:06.409 --rc genhtml_legend=1 00:11:06.409 --rc geninfo_all_blocks=1 00:11:06.409 --rc geninfo_unexecuted_blocks=1 00:11:06.409 00:11:06.409 ' 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:06.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.409 --rc genhtml_branch_coverage=1 00:11:06.409 --rc genhtml_function_coverage=1 00:11:06.409 --rc genhtml_legend=1 00:11:06.409 --rc geninfo_all_blocks=1 00:11:06.409 --rc geninfo_unexecuted_blocks=1 00:11:06.409 00:11:06.409 ' 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:06.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.409 --rc genhtml_branch_coverage=1 00:11:06.409 --rc genhtml_function_coverage=1 00:11:06.409 --rc genhtml_legend=1 00:11:06.409 --rc geninfo_all_blocks=1 00:11:06.409 --rc geninfo_unexecuted_blocks=1 00:11:06.409 00:11:06.409 ' 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:06.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.409 --rc genhtml_branch_coverage=1 00:11:06.409 --rc genhtml_function_coverage=1 00:11:06.409 --rc genhtml_legend=1 00:11:06.409 --rc geninfo_all_blocks=1 00:11:06.409 --rc geninfo_unexecuted_blocks=1 00:11:06.409 00:11:06.409 ' 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.409 11:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.409 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:06.409 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:06.410 Cannot find device "nvmf_init_br" 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:06.410 Cannot find device "nvmf_init_br2" 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:06.410 Cannot find device "nvmf_tgt_br" 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:11:06.410 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:06.668 Cannot find device "nvmf_tgt_br2" 00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:06.668 Cannot find device "nvmf_init_br" 00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:06.668 Cannot find device "nvmf_init_br2" 00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:06.668 Cannot find device "nvmf_tgt_br" 00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:06.668 Cannot find device "nvmf_tgt_br2" 00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:06.668 Cannot find device "nvmf_br" 00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:06.668 Cannot find device "nvmf_init_if" 00:11:06.668 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:06.669 Cannot find device "nvmf_init_if2" 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:06.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:06.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:06.669 11:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:06.669 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:06.927 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:06.927 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:06.927 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:06.927 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:06.927 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:06.927 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:06.927 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:06.928 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:06.928 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:11:06.928 00:11:06.928 --- 10.0.0.3 ping statistics --- 00:11:06.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.928 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:06.928 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:06.928 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:11:06.928 00:11:06.928 --- 10.0.0.4 ping statistics --- 00:11:06.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.928 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:06.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:06.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:06.928 00:11:06.928 --- 10.0.0.1 ping statistics --- 00:11:06.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.928 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:06.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:11:06.928 00:11:06.928 --- 10.0.0.2 ping statistics --- 00:11:06.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.928 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=74984 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 74984 00:11:06.928 11:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 74984 ']' 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.928 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:06.928 [2024-12-16 11:33:07.632575] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:11:06.928 [2024-12-16 11:33:07.632985] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.186 [2024-12-16 11:33:07.782397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.186 [2024-12-16 11:33:07.836496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.186 [2024-12-16 11:33:07.836791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.186 [2024-12-16 11:33:07.837030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.186 [2024-12-16 11:33:07.837219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.186 [2024-12-16 11:33:07.837341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
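The nvmf_veth_init sequence traced above (common.sh lines 177 through 219) builds two veth pairs toward the initiator and two toward the target namespace, bridges the host-side peers, and opens TCP port 4420. A condensed sketch of that bring-up, using only commands that appear in the trace:

    # Condensed sketch of the topology nvmf_veth_init builds, per the trace above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Each firewall hole carries an SPDK_NVMF comment so teardown can find it.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

Tagging every rule with an SPDK_NVMF comment is what lets the later iptr teardown strip exactly these rules via iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen further down in this log.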
00:11:07.186 [2024-12-16 11:33:07.838664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.186 [2024-12-16 11:33:07.838881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.186 [2024-12-16 11:33:07.838749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.186 [2024-12-16 11:33:07.838874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.444 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.444 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:07.444 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.444 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.444 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.444 [2024-12-16 11:33:08.023776] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.444 11:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.444 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:07.445 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.445 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:07.445 [2024-12-16 11:33:08.094974] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:07.445 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.445 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:07.445 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:07.445 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:09.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.867 rmmod nvme_tcp 00:11:18.867 rmmod nvme_fabrics 00:11:18.867 rmmod nvme_keyring 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 74984 ']' 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 74984 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 74984 ']' 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 74984 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
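The subsystem bring-up that connect_disconnect.sh drives over the RPC socket, and the five-iteration loop whose disconnect notices appear above, correspond roughly to the following sequence. This is a sketch: the rpc.py path and the nvme-cli invocations inside the loop are assumptions, since the trace records only the rpc_cmd names and the "disconnected 1 controller(s)" notices:

    # Sketch of the RPC bring-up recorded above (rpc.py path assumed).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512          # returns the bdev name, Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # num_iterations=5 per the trace; the exact nvme-cli flags below are an
    # assumption, only the disconnect notices appear in the log.
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done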
00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74984 00:11:18.867 killing process with pid 74984 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74984' 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 74984 00:11:18.867 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 74984 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:19.125 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:19.382 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:19.382 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:19.382 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:19.382 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:19.382 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:19.382 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:19.382 11:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:11:19.382 00:11:19.382 real 0m13.173s 00:11:19.382 user 0m46.766s 00:11:19.382 sys 0m2.022s 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:19.382 ************************************ 00:11:19.382 END TEST nvmf_connect_disconnect 00:11:19.382 ************************************ 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.382 ************************************ 00:11:19.382 START TEST nvmf_multitarget 00:11:19.382 ************************************ 00:11:19.382 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:19.641 * Looking for test storage... 
00:11:19.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.641 --rc genhtml_branch_coverage=1 00:11:19.641 --rc genhtml_function_coverage=1 00:11:19.641 --rc genhtml_legend=1 00:11:19.641 --rc geninfo_all_blocks=1 00:11:19.641 --rc geninfo_unexecuted_blocks=1 00:11:19.641 00:11:19.641 ' 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.641 --rc genhtml_branch_coverage=1 00:11:19.641 --rc genhtml_function_coverage=1 00:11:19.641 --rc genhtml_legend=1 00:11:19.641 --rc geninfo_all_blocks=1 00:11:19.641 --rc geninfo_unexecuted_blocks=1 00:11:19.641 00:11:19.641 ' 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.641 --rc genhtml_branch_coverage=1 00:11:19.641 --rc genhtml_function_coverage=1 00:11:19.641 --rc genhtml_legend=1 00:11:19.641 --rc geninfo_all_blocks=1 00:11:19.641 --rc geninfo_unexecuted_blocks=1 00:11:19.641 00:11:19.641 ' 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.641 --rc genhtml_branch_coverage=1 00:11:19.641 --rc genhtml_function_coverage=1 00:11:19.641 --rc genhtml_legend=1 00:11:19.641 --rc geninfo_all_blocks=1 00:11:19.641 --rc geninfo_unexecuted_blocks=1 00:11:19.641 00:11:19.641 ' 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.641 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.641 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:19.642 Cannot find device "nvmf_init_br" 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:19.642 Cannot find device "nvmf_init_br2" 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:19.642 Cannot find device "nvmf_tgt_br" 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:19.642 Cannot find device "nvmf_tgt_br2" 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:19.642 Cannot find device "nvmf_init_br" 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:19.642 Cannot find device "nvmf_init_br2" 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:11:19.642 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:19.899 Cannot find device "nvmf_tgt_br" 00:11:19.899 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:11:19.899 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:19.899 Cannot find device "nvmf_tgt_br2" 00:11:19.899 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:11:19.899 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:19.899 Cannot find device "nvmf_br" 00:11:19.899 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:11:19.899 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:19.899 Cannot find device "nvmf_init_if" 00:11:19.899 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:11:19.899 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:19.899 Cannot find device "nvmf_init_if2" 00:11:19.899 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:11:19.899 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:19.899 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:19.899 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:11:19.899 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:19.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:19.900 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:20.159 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:20.159 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:11:20.159 00:11:20.159 --- 10.0.0.3 ping statistics --- 00:11:20.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.159 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:20.159 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:20.159 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:11:20.159 00:11:20.159 --- 10.0.0.4 ping statistics --- 00:11:20.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.159 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:20.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:20.159 00:11:20.159 --- 10.0.0.1 ping statistics --- 00:11:20.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.159 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:20.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:20.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:11:20.159 00:11:20.159 --- 10.0.0.2 ping statistics --- 00:11:20.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.159 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:20.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=75425 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 75425 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 75425 ']' 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.159 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:20.159 [2024-12-16 11:33:20.816729] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
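waitforlisten, which gates both targets in this log (pids 74984 and 75425), is in essence a bounded poll on the RPC socket. A minimal sketch of the pattern, assuming an rpc_get_methods probe; max_retries=100 comes from the trace, while the probe command and sleep interval are guesses:

    # Start the app inside the target namespace, then poll /var/tmp/spdk.sock.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for ((i = 0; i < 100; i++)); do         # local max_retries=100, per the trace
        kill -0 "$nvmfpid" || exit 1        # bail out if the app already died
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1 && break   # socket answers: app is up
        sleep 0.5
    done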
00:11:20.159 [2024-12-16 11:33:20.817035] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.417 [2024-12-16 11:33:20.972032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.417 [2024-12-16 11:33:21.042514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.418 [2024-12-16 11:33:21.042779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.418 [2024-12-16 11:33:21.042951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.418 [2024-12-16 11:33:21.043113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.418 [2024-12-16 11:33:21.043181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.418 [2024-12-16 11:33:21.044631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.418 [2024-12-16 11:33:21.044720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.418 [2024-12-16 11:33:21.044842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.418 [2024-12-16 11:33:21.044849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.676 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.676 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:20.676 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:20.676 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:20.676 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:20.676 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.676 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:20.676 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:20.676 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:20.676 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:20.676 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:20.934 "nvmf_tgt_1" 00:11:20.934 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:20.934 "nvmf_tgt_2" 00:11:20.934 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:20.934 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:11:21.192 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:21.192 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:21.192 true 00:11:21.192 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:21.450 true 00:11:21.450 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:21.450 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:21.450 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:21.450 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:21.450 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:21.450 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.450 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.708 rmmod nvme_tcp 00:11:21.708 rmmod nvme_fabrics 00:11:21.708 rmmod nvme_keyring 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 75425 ']' 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 75425 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 75425 ']' 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 75425 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75425 00:11:21.708 killing process with pid 75425 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
75425' 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 75425 00:11:21.708 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 75425 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.967 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:11:22.226 
************************************ 00:11:22.226 END TEST nvmf_multitarget 00:11:22.226 ************************************ 00:11:22.226 00:11:22.226 real 0m2.651s 00:11:22.226 user 0m7.019s 00:11:22.226 sys 0m0.817s 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:22.226 ************************************ 00:11:22.226 START TEST nvmf_rpc 00:11:22.226 ************************************ 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:22.226 * Looking for test storage... 00:11:22.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.226 --rc genhtml_branch_coverage=1 00:11:22.226 --rc genhtml_function_coverage=1 00:11:22.226 --rc genhtml_legend=1 00:11:22.226 --rc geninfo_all_blocks=1 00:11:22.226 --rc geninfo_unexecuted_blocks=1 00:11:22.226 00:11:22.226 ' 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.226 --rc genhtml_branch_coverage=1 00:11:22.226 --rc genhtml_function_coverage=1 00:11:22.226 --rc genhtml_legend=1 00:11:22.226 --rc geninfo_all_blocks=1 00:11:22.226 --rc geninfo_unexecuted_blocks=1 00:11:22.226 00:11:22.226 ' 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.226 --rc genhtml_branch_coverage=1 00:11:22.226 --rc genhtml_function_coverage=1 00:11:22.226 --rc genhtml_legend=1 00:11:22.226 --rc geninfo_all_blocks=1 00:11:22.226 --rc geninfo_unexecuted_blocks=1 00:11:22.226 00:11:22.226 ' 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.226 --rc genhtml_branch_coverage=1 00:11:22.226 --rc genhtml_function_coverage=1 00:11:22.226 --rc genhtml_legend=1 00:11:22.226 --rc geninfo_all_blocks=1 00:11:22.226 --rc geninfo_unexecuted_blocks=1 00:11:22.226 00:11:22.226 ' 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.226 11:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.226 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.485 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.486 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.486 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:22.486 Cannot find device "nvmf_init_br" 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:11:22.486 11:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:22.486 Cannot find device "nvmf_init_br2" 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:22.486 Cannot find device "nvmf_tgt_br" 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:22.486 Cannot find device "nvmf_tgt_br2" 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:22.486 Cannot find device "nvmf_init_br" 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:22.486 Cannot find device "nvmf_init_br2" 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:22.486 Cannot find device "nvmf_tgt_br" 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:22.486 Cannot find device "nvmf_tgt_br2" 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:22.486 Cannot find device "nvmf_br" 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:22.486 Cannot find device "nvmf_init_if" 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:22.486 Cannot find device "nvmf_init_if2" 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:22.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:22.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:22.486 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:22.745 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:22.745 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:11:22.745 00:11:22.745 --- 10.0.0.3 ping statistics --- 00:11:22.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.745 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:22.745 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:22.745 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:11:22.745 00:11:22.745 --- 10.0.0.4 ping statistics --- 00:11:22.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.745 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:22.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:22.745 00:11:22.745 --- 10.0.0.1 ping statistics --- 00:11:22.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.745 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:22.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:22.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:11:22.745 00:11:22.745 --- 10.0.0.2 ping statistics --- 00:11:22.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.745 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=75696 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 75696 00:11:22.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 75696 ']' 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.745 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.746 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.746 [2024-12-16 11:33:23.458996] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
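Before the target application starts, the nvmf_veth_init sequence traced above builds the test topology that the four pings verify. Condensed into plain commands, it amounts to the following sketch; every interface name and address appears in the trace, only the grouping into loops is the editor's.

# Namespace in which nvmf_tgt will run.
ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends move into the namespace; each endpoint gets a 10.0.0.0/24 address.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port on the initiator interfaces and allow bridge forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Reachability checks, exactly as in the trace.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2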
00:11:22.746 [2024-12-16 11:33:23.459671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.004 [2024-12-16 11:33:23.604078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.004 [2024-12-16 11:33:23.655920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.004 [2024-12-16 11:33:23.656439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.004 [2024-12-16 11:33:23.656661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.004 [2024-12-16 11:33:23.656765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.004 [2024-12-16 11:33:23.656851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.004 [2024-12-16 11:33:23.658347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.004 [2024-12-16 11:33:23.658487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.004 [2024-12-16 11:33:23.658575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.004 [2024-12-16 11:33:23.658579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:23.262 "poll_groups": [ 00:11:23.262 { 00:11:23.262 "admin_qpairs": 0, 00:11:23.262 "completed_nvme_io": 0, 00:11:23.262 "current_admin_qpairs": 0, 00:11:23.262 "current_io_qpairs": 0, 00:11:23.262 "io_qpairs": 0, 00:11:23.262 "name": "nvmf_tgt_poll_group_000", 00:11:23.262 "pending_bdev_io": 0, 00:11:23.262 "transports": [] 00:11:23.262 }, 00:11:23.262 { 00:11:23.262 "admin_qpairs": 0, 00:11:23.262 "completed_nvme_io": 0, 00:11:23.262 "current_admin_qpairs": 0, 00:11:23.262 "current_io_qpairs": 0, 00:11:23.262 "io_qpairs": 0, 00:11:23.262 "name": "nvmf_tgt_poll_group_001", 00:11:23.262 "pending_bdev_io": 0, 00:11:23.262 "transports": [] 00:11:23.262 }, 00:11:23.262 { 00:11:23.262 "admin_qpairs": 0, 00:11:23.262 "completed_nvme_io": 0, 00:11:23.262 "current_admin_qpairs": 0, 00:11:23.262 "current_io_qpairs": 0, 
00:11:23.262 "io_qpairs": 0, 00:11:23.262 "name": "nvmf_tgt_poll_group_002", 00:11:23.262 "pending_bdev_io": 0, 00:11:23.262 "transports": [] 00:11:23.262 }, 00:11:23.262 { 00:11:23.262 "admin_qpairs": 0, 00:11:23.262 "completed_nvme_io": 0, 00:11:23.262 "current_admin_qpairs": 0, 00:11:23.262 "current_io_qpairs": 0, 00:11:23.262 "io_qpairs": 0, 00:11:23.262 "name": "nvmf_tgt_poll_group_003", 00:11:23.262 "pending_bdev_io": 0, 00:11:23.262 "transports": [] 00:11:23.262 } 00:11:23.262 ], 00:11:23.262 "tick_rate": 2200000000 00:11:23.262 }' 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.262 [2024-12-16 11:33:23.958369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.262 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:23.262 "poll_groups": [ 00:11:23.262 { 00:11:23.262 "admin_qpairs": 0, 00:11:23.262 "completed_nvme_io": 0, 00:11:23.262 "current_admin_qpairs": 0, 00:11:23.262 "current_io_qpairs": 0, 00:11:23.262 "io_qpairs": 0, 00:11:23.262 "name": "nvmf_tgt_poll_group_000", 00:11:23.262 "pending_bdev_io": 0, 00:11:23.262 "transports": [ 00:11:23.262 { 00:11:23.262 "trtype": "TCP" 00:11:23.262 } 00:11:23.262 ] 00:11:23.262 }, 00:11:23.262 { 00:11:23.262 "admin_qpairs": 0, 00:11:23.262 "completed_nvme_io": 0, 00:11:23.262 "current_admin_qpairs": 0, 00:11:23.262 "current_io_qpairs": 0, 00:11:23.262 "io_qpairs": 0, 00:11:23.262 "name": "nvmf_tgt_poll_group_001", 00:11:23.262 "pending_bdev_io": 0, 00:11:23.262 "transports": [ 00:11:23.262 { 00:11:23.262 "trtype": "TCP" 00:11:23.262 } 00:11:23.262 ] 00:11:23.262 }, 00:11:23.263 { 00:11:23.263 "admin_qpairs": 0, 00:11:23.263 "completed_nvme_io": 0, 00:11:23.263 "current_admin_qpairs": 0, 00:11:23.263 "current_io_qpairs": 0, 00:11:23.263 "io_qpairs": 0, 00:11:23.263 "name": "nvmf_tgt_poll_group_002", 00:11:23.263 "pending_bdev_io": 0, 00:11:23.263 "transports": [ 00:11:23.263 { 00:11:23.263 "trtype": "TCP" 00:11:23.263 } 
00:11:23.263 ] 00:11:23.263 }, 00:11:23.263 { 00:11:23.263 "admin_qpairs": 0, 00:11:23.263 "completed_nvme_io": 0, 00:11:23.263 "current_admin_qpairs": 0, 00:11:23.263 "current_io_qpairs": 0, 00:11:23.263 "io_qpairs": 0, 00:11:23.263 "name": "nvmf_tgt_poll_group_003", 00:11:23.263 "pending_bdev_io": 0, 00:11:23.263 "transports": [ 00:11:23.263 { 00:11:23.263 "trtype": "TCP" 00:11:23.263 } 00:11:23.263 ] 00:11:23.263 } 00:11:23.263 ], 00:11:23.263 "tick_rate": 2200000000 00:11:23.263 }' 00:11:23.263 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:23.263 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:23.263 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:23.263 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.521 Malloc1 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.521 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:23.522 11:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.522 [2024-12-16 11:33:24.164563] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -a 10.0.0.3 -s 4420 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -a 10.0.0.3 -s 4420 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -a 10.0.0.3 -s 4420 00:11:23.522 [2024-12-16 11:33:24.193251] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8' 00:11:23.522 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:23.522 could not add new controller: failed to write to nvme-fabrics device 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.522 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:23.780 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:23.780 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:23.780 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.780 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:23.780 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:25.681 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:25.681 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:25.681 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.681 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:25.681 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.681 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:25.681 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.939 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.939 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
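The access-control exchange just traced follows a fixed pattern: with allow_any_host disabled, a connect from an unregistered host NQN must fail with "does not allow host", and after nvmf_subsystem_add_host the same connect must succeed. As an editor's sketch (rpc_cmd stands for the harness's SPDK RPC wrapper; the NQN, host ID, and address values are the ones in the trace):

NQN=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8

rpc_cmd nvmf_subsystem_allow_any_host -d "$NQN"

# Expected to fail: the host NQN is not on the subsystem's allow list.
nvme connect --hostnqn="$HOST" --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 \
    -t tcp -n "$NQN" -a 10.0.0.3 -s 4420 && exit 1

rpc_cmd nvmf_subsystem_add_host "$NQN" "$HOST"

# Now the same connect succeeds.
nvme connect --hostnqn="$HOST" --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 \
    -t tcp -n "$NQN" -a 10.0.0.3 -s 4420

The trace that follows removes the host again with nvmf_subsystem_remove_host, repeats the negative connect check, and then re-enables allow_any_host before the subsystem loop begins.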
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:25.940 [2024-12-16 11:33:26.483966] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8' 00:11:25.940 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:25.940 could not add new controller: failed to write to nvme-fabrics device 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:25.940 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.469 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.470 [2024-12-16 11:33:28.775645] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:28.470 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:30.367 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:30.367 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:30.367 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.367 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:30.367 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.367 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:30.367 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.367 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.368 [2024-12-16 11:33:31.074692] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.368 11:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.368 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:30.625 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.625 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:30.625 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.625 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:30.625 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:32.523 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.781 11:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.782 [2024-12-16 11:33:33.486404] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.782 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:33.039 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.039 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:33.039 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.039 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:33.039 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 00:11:34.937 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:34.937 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:34.937 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.195 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:35.196 11:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.196 [2024-12-16 11:33:35.797796] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.196 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:35.454 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:35.454 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:35.454 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.454 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:35.454 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:37.351 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:37.351 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:37.351 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:37.351 11:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.351 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.351 [2024-12-16 11:33:38.101104] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:37.609 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
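The five iterations traced above all run the same attach/detach cycle from target/rpc.sh. Condensed into plain commands, one iteration looks roughly like the sketch below. This is an illustrative reduction, not the framework code itself: rpc.py here stands in for the harness's rpc_cmd wrapper around SPDK's scripts/rpc.py, and the bounded waitforserial retry from the trace is simplified to an open-ended polling loop. The commands and flags themselves are taken verbatim from the trace.

  # One pass of the provisioning loop above: build the subsystem, expose it
  # over NVMe/TCP, attach from the host, then tear it all down again.
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 \
      --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
  # waitforserial, simplified: poll until the namespace surfaces as a block device
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The second loop that begins at target/rpc.sh@99 below repeats the same create/listen/teardown sequence five more times without a host connection; the test then dumps nvmf_get_stats and checks that the poll groups accumulated a nonzero count of admin and I/O queue pairs.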
00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.136 [2024-12-16 11:33:40.412232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.136 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 [2024-12-16 11:33:40.460231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:40.137 11:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 [2024-12-16 11:33:40.508276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 [2024-12-16 11:33:40.556331] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 
11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 [2024-12-16 11:33:40.604362] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]]
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.137 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:11:40.137 "poll_groups": [
00:11:40.137 {
00:11:40.137 "admin_qpairs": 2,
00:11:40.137 "completed_nvme_io": 69,
00:11:40.137 "current_admin_qpairs": 0,
00:11:40.137 "current_io_qpairs": 0,
00:11:40.137 "io_qpairs": 16,
00:11:40.137 "name": "nvmf_tgt_poll_group_000",
00:11:40.137 "pending_bdev_io": 0,
00:11:40.137 "transports": [
00:11:40.137 {
00:11:40.137 "trtype": "TCP"
00:11:40.137 }
00:11:40.137 ]
00:11:40.138 },
00:11:40.138 {
00:11:40.138 "admin_qpairs": 3,
00:11:40.138 "completed_nvme_io": 115,
00:11:40.138 "current_admin_qpairs": 0,
00:11:40.138 "current_io_qpairs": 0,
00:11:40.138 "io_qpairs": 17,
00:11:40.138 "name": "nvmf_tgt_poll_group_001",
00:11:40.138 "pending_bdev_io": 0,
00:11:40.138 "transports": [
00:11:40.138 {
00:11:40.138 "trtype": "TCP"
00:11:40.138 }
00:11:40.138 ]
00:11:40.138 },
00:11:40.138 {
00:11:40.138 "admin_qpairs": 1,
00:11:40.138 "completed_nvme_io": 168,
00:11:40.138 "current_admin_qpairs": 0,
00:11:40.138 "current_io_qpairs": 0,
00:11:40.138 "io_qpairs": 19,
00:11:40.138 "name": "nvmf_tgt_poll_group_002",
00:11:40.138 "pending_bdev_io": 0,
00:11:40.138 "transports": [
00:11:40.138 {
00:11:40.138 "trtype": "TCP"
00:11:40.138 }
00:11:40.138 ]
00:11:40.138 },
00:11:40.138 {
00:11:40.138 "admin_qpairs": 1,
00:11:40.138 "completed_nvme_io": 68,
00:11:40.138 "current_admin_qpairs": 0,
00:11:40.138 "current_io_qpairs": 0,
00:11:40.138 "io_qpairs": 18,
00:11:40.138 "name": "nvmf_tgt_poll_group_003",
00:11:40.138 "pending_bdev_io": 0,
00:11:40.138 "transports": [
00:11:40.138 {
00:11:40.138 "trtype": "TCP"
00:11:40.138 }
00:11:40.138 ]
00:11:40.138 }
00:11:40.138 ],
00:11:40.138 "tick_rate": 2200000000 00:11:40.138 }' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.138 rmmod nvme_tcp 00:11:40.138 rmmod nvme_fabrics 00:11:40.138 rmmod nvme_keyring 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 75696 ']' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 75696 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 75696 ']' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 75696 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75696 00:11:40.138 killing process with pid 75696 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.138 11:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75696' 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 75696 00:11:40.138 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 75696 00:11:40.395 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:40.395 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:40.395 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:40.395 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:40.395 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:40.395 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:40.396 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:40.396 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.396 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:40.396 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:40.396 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:40.396 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:11:40.654 00:11:40.654 real 0m18.532s 00:11:40.654 user 1m8.617s 00:11:40.654 sys 0m2.629s 00:11:40.654 ************************************ 
00:11:40.654 END TEST nvmf_rpc 00:11:40.654 ************************************ 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:40.654 ************************************ 00:11:40.654 START TEST nvmf_invalid 00:11:40.654 ************************************ 00:11:40.654 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:40.913 * Looking for test storage... 00:11:40.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:40.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.913 --rc genhtml_branch_coverage=1 00:11:40.913 --rc genhtml_function_coverage=1 00:11:40.913 --rc genhtml_legend=1 00:11:40.913 --rc geninfo_all_blocks=1 00:11:40.913 --rc geninfo_unexecuted_blocks=1 00:11:40.913 00:11:40.913 ' 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:40.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.913 --rc genhtml_branch_coverage=1 00:11:40.913 --rc genhtml_function_coverage=1 00:11:40.913 --rc genhtml_legend=1 00:11:40.913 --rc geninfo_all_blocks=1 00:11:40.913 --rc geninfo_unexecuted_blocks=1 00:11:40.913 00:11:40.913 ' 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:40.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.913 --rc genhtml_branch_coverage=1 00:11:40.913 --rc genhtml_function_coverage=1 00:11:40.913 --rc genhtml_legend=1 00:11:40.913 --rc geninfo_all_blocks=1 00:11:40.913 --rc geninfo_unexecuted_blocks=1 00:11:40.913 00:11:40.913 ' 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:40.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.913 --rc genhtml_branch_coverage=1 00:11:40.913 --rc genhtml_function_coverage=1 00:11:40.913 --rc genhtml_legend=1 00:11:40.913 --rc geninfo_all_blocks=1 00:11:40.913 --rc geninfo_unexecuted_blocks=1 00:11:40.913 00:11:40.913 ' 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:40.913 11:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.913 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.914 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
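For orientation before the interface-by-interface trace that follows: the veth topology that nvmf_veth_init builds from the variables above condenses to roughly the sketch below. Interface names and addresses are taken from the trace itself; the second initiator/target pair and the iptables ACCEPT rules are wired up the same way, and the authoritative logic lives in test/nvmf/common.sh.

    # minimal sketch: one initiator veth on the host, one target veth in a
    # network namespace, both enslaved to the same bridge (second pair omitted)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3    # host initiator reaches the namespaced target

After bringing both pairs up, the trace below verifies exactly this reachability with one ping per address before starting the target application.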
00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:40.914 Cannot find device "nvmf_init_br" 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:40.914 Cannot find device "nvmf_init_br2" 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:40.914 Cannot find device "nvmf_tgt_br" 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:40.914 Cannot find device "nvmf_tgt_br2" 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:40.914 Cannot find device "nvmf_init_br" 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:40.914 Cannot find device "nvmf_init_br2" 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:11:40.914 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:41.173 Cannot find device "nvmf_tgt_br" 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:41.173 Cannot find device "nvmf_tgt_br2" 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:41.173 Cannot find device "nvmf_br" 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:41.173 Cannot find device "nvmf_init_if" 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:41.173 Cannot find device "nvmf_init_if2" 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:41.173 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:41.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:41.173 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:41.173 11:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:41.431 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:41.431 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:11:41.431 00:11:41.431 --- 10.0.0.3 ping statistics --- 00:11:41.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.431 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:41.431 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:41.431 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:11:41.431 00:11:41.431 --- 10.0.0.4 ping statistics --- 00:11:41.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.431 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:41.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:41.431 00:11:41.431 --- 10.0.0.1 ping statistics --- 00:11:41.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.431 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:41.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:41.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:11:41.431 00:11:41.431 --- 10.0.0.2 ping statistics --- 00:11:41.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.431 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:41.431 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.432 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:41.432 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=76244 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 76244 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 76244 ']' 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.432 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:41.432 [2024-12-16 11:33:42.105651] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
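Once waitforlisten sees the target's RPC socket answer, target/invalid.sh begins issuing deliberately malformed nvmf_create_subsystem calls and asserting on the returned error text, as traced below. A condensed sketch of that pattern follows; the rpc.py path and the -t/-s/-d flags are taken from the trace, while the bare exit 1 error handling is a simplification of the script's real checks:

    # sketch of the invalid-parameter checks exercised below (simplified)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode

    # -t: a subsystem on an unknown target name must be rejected
    out=$("$rpc" nvmf_create_subsystem -t foobar "${nqn}3039" 2>&1) && exit 1
    [[ $out == *'Unable to find target'* ]] || exit 1

    # -s: a serial number containing a control byte (0x1f) must be rejected
    out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' "${nqn}14092" 2>&1) && exit 1
    [[ $out == *'Invalid SN'* ]] || exit 1

    # -d: a model number containing the same control byte must be rejected
    out=$("$rpc" nvmf_create_subsystem -d $'SPDK_Controller\037' "${nqn}2655" 2>&1) && exit 1
    [[ $out == *'Invalid MN'* ]] || exit 1

Each call is expected to fail; the && exit 1 arms trip only if the target wrongly accepts the bad parameter.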
00:11:41.432 [2024-12-16 11:33:42.105788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.690 [2024-12-16 11:33:42.267808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.690 [2024-12-16 11:33:42.325096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.690 [2024-12-16 11:33:42.325421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.690 [2024-12-16 11:33:42.325550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.690 [2024-12-16 11:33:42.325673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.690 [2024-12-16 11:33:42.325707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.690 [2024-12-16 11:33:42.326952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.690 [2024-12-16 11:33:42.327033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.690 [2024-12-16 11:33:42.327177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.690 [2024-12-16 11:33:42.327178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.949 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.949 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:41.949 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:41.949 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:41.949 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:41.949 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.949 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:41.949 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3039 00:11:42.207 [2024-12-16 11:33:42.785936] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:42.207 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/16 11:33:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode3039 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:42.207 request: 00:11:42.207 { 00:11:42.207 "method": "nvmf_create_subsystem", 00:11:42.207 "params": { 00:11:42.207 "nqn": "nqn.2016-06.io.spdk:cnode3039", 00:11:42.207 "tgt_name": "foobar" 00:11:42.207 } 00:11:42.207 } 00:11:42.207 Got JSON-RPC error response 00:11:42.207 GoRPCClient: error on JSON-RPC call' 00:11:42.208 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/16 11:33:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[nqn:nqn.2016-06.io.spdk:cnode3039 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:42.208 request: 00:11:42.208 { 00:11:42.208 "method": "nvmf_create_subsystem", 00:11:42.208 "params": { 00:11:42.208 "nqn": "nqn.2016-06.io.spdk:cnode3039", 00:11:42.208 "tgt_name": "foobar" 00:11:42.208 } 00:11:42.208 } 00:11:42.208 Got JSON-RPC error response 00:11:42.208 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:42.208 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:42.208 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14092 00:11:42.466 [2024-12-16 11:33:43.098236] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14092: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:42.466 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/16 11:33:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14092 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:42.466 request: 00:11:42.466 { 00:11:42.466 "method": "nvmf_create_subsystem", 00:11:42.466 "params": { 00:11:42.466 "nqn": "nqn.2016-06.io.spdk:cnode14092", 00:11:42.466 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:42.466 } 00:11:42.466 } 00:11:42.466 Got JSON-RPC error response 00:11:42.466 GoRPCClient: error on JSON-RPC call' 00:11:42.466 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/16 11:33:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14092 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:42.466 request: 00:11:42.466 { 00:11:42.466 "method": "nvmf_create_subsystem", 00:11:42.466 "params": { 00:11:42.466 "nqn": "nqn.2016-06.io.spdk:cnode14092", 00:11:42.466 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:42.466 } 00:11:42.466 } 00:11:42.466 Got JSON-RPC error response 00:11:42.466 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:42.466 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:42.466 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2655 00:11:42.725 [2024-12-16 11:33:43.402630] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2655: invalid model number 'SPDK_Controller' 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/16 11:33:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode2655], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:42.725 request: 00:11:42.725 { 00:11:42.725 "method": "nvmf_create_subsystem", 00:11:42.725 "params": { 00:11:42.725 "nqn": "nqn.2016-06.io.spdk:cnode2655", 00:11:42.725 "model_number": "SPDK_Controller\u001f" 00:11:42.725 } 
00:11:42.725 } 00:11:42.725 Got JSON-RPC error response 00:11:42.725 GoRPCClient: error on JSON-RPC call' 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/16 11:33:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode2655], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:42.725 request: 00:11:42.725 { 00:11:42.725 "method": "nvmf_create_subsystem", 00:11:42.725 "params": { 00:11:42.725 "nqn": "nqn.2016-06.io.spdk:cnode2655", 00:11:42.725 "model_number": "SPDK_Controller\u001f" 00:11:42.725 } 00:11:42.725 } 00:11:42.725 Got JSON-RPC error response 00:11:42.725 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.725 11:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.725 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:42.726 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:42.726 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:42.726 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.726 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.726 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:42.726 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:42.726 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:42.726 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.726 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.726 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:42.726 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:43.051 
11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 
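Each printf/echo pair in this stretch performs one decimal-to-character conversion for the random string being assembled; isolated from the loop, the step is just the following (a standalone sketch of the idiom, not the script verbatim):

    # one character of the random string: decimal code -> hex escape -> char
    code=87                       # picked from the chars table via $RANDOM
    hex=$(printf '%x' "$code")    # 57
    ch=$(echo -e "\x$hex")        # W
    string+=$ch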
00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'U|6~*5w<*G -TcKiWC3(' 00:11:43.051 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'U|6~*5w<*G -TcKiWC3(' nqn.2016-06.io.spdk:cnode492 00:11:43.311 [2024-12-16 11:33:43.854994] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode492: invalid serial number 'U|6~*5w<*G -TcKiWC3(' 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/12/16 11:33:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode492 serial_number:U|6~*5w<*G -TcKiWC3(], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN U|6~*5w<*G -TcKiWC3( 00:11:43.311 request: 00:11:43.311 { 00:11:43.311 "method": "nvmf_create_subsystem", 00:11:43.311 "params": { 00:11:43.311 "nqn": "nqn.2016-06.io.spdk:cnode492", 
00:11:43.311 "serial_number": "U|6~*5w<*G -TcKiWC3\u007f(" 00:11:43.311 } 00:11:43.311 } 00:11:43.311 Got JSON-RPC error response 00:11:43.311 GoRPCClient: error on JSON-RPC call' 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/12/16 11:33:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode492 serial_number:U|6~*5w<*G -TcKiWC3(], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN U|6~*5w<*G -TcKiWC3( 00:11:43.311 request: 00:11:43.311 { 00:11:43.311 "method": "nvmf_create_subsystem", 00:11:43.311 "params": { 00:11:43.311 "nqn": "nqn.2016-06.io.spdk:cnode492", 00:11:43.311 "serial_number": "U|6~*5w<*G -TcKiWC3\u007f(" 00:11:43.311 } 00:11:43.311 } 00:11:43.311 Got JSON-RPC error response 00:11:43.311 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:43.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:43.312 11:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:43.312 
11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:43.312 
11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:43.312 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
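The increment/bounds-check cadence above repeats until all 41 characters are collected. Condensed, gen_random_s amounts to the sketch below; the 96-entry character table and the RANDOM=0 seeding match the trace, the helper itself lives in target/invalid.sh, and printf -v is used here because a plain $(echo -e ...) substitution would strip a generated space character:

    # sketch: build an N-character string from character codes 32..127
    gen_random_s_sketch() {
        local length=$1 ll code ch string=
        local chars=($(seq 32 127))    # same table as in the trace above
        for ((ll = 0; ll < length; ll++)); do
            code=${chars[RANDOM % ${#chars[@]}]}
            printf -v ch "\\x$(printf '%x' "$code")"   # code -> one character
            string+=$ch
        done
        printf '%s\n' "$string"        # printf, in case string starts with '-'
    }
    RANDOM=0                  # seeded once at the top of the test, so runs reproduce
    gen_random_s_sketch 41    # the result feeds nvmf_create_subsystem -d below

The generated model number is then passed to nvmf_create_subsystem, which must reject it with Invalid MN, as the trace shows right after the loop completes.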
00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.313 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.571 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:43.571 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:43.571 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=2 00:11:43.571 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.571 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.571 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:43.571 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:43.571 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ q == \- ]] 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'q%oC_GoeKT*&UtM:Vo43D99]k8[G6>$P-2Loq' 00:11:43.572 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'q%oC_GoeKT*&UtM:Vo43D99]k8[G6>$P-2Loq' nqn.2016-06.io.spdk:cnode25124 00:11:43.831 [2024-12-16 11:33:44.375443] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25124: invalid model number 'q%oC_GoeKT*&UtM:Vo43D99]k8[G6>$P-2Loq' 00:11:43.831 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/12/16 11:33:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:q%oC_GoeKT*&UtM:Vo43D99]k8[G6>$P-2Loq nqn:nqn.2016-06.io.spdk:cnode25124], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN q%oC_GoeKT*&UtM:Vo43D99]k8[G6>$P-2Loq 00:11:43.831 request: 00:11:43.831 { 00:11:43.831 "method": "nvmf_create_subsystem", 00:11:43.831 "params": { 00:11:43.831 "nqn": "nqn.2016-06.io.spdk:cnode25124", 
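The long run of printf %x / echo -e pairs above is invalid.sh's string generator at work: each loop pass picks a code point, renders it as hex, converts the hex back into a literal byte with echo -e '\xNN', and appends it to string, deliberately mixing in non-printable bytes such as \x7f (DEL). The finished string is then submitted as a model number, which nvmf_create_subsystem must reject. A minimal sketch of that loop, reconstructed from the xtrace rather than copied from target/invalid.sh (the RANDOM-based code-point selection and the fixed length are illustrative assumptions):

    length=28                               # illustrative; the real length varies per run
    string=
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 96 + 32 ))        # 32..127, so DEL (0x7f) can appear, as in the trace
        hex=$(printf %x "$code")
        string+=$(echo -e "\x$hex")         # hex code back to a literal character
    done
    echo "$string"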
00:11:43.831 "model_number": "q%oC_\u007fGoeKT*&UtM:V\u007fo43D99]k8[G\u007f6>$P-2L\u007foq" 00:11:43.831 } 00:11:43.831 } 00:11:43.831 Got JSON-RPC error response 00:11:43.831 GoRPCClient: error on JSON-RPC call' 00:11:43.831 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/12/16 11:33:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:q%oC_GoeKT*&UtM:Vo43D99]k8[G6>$P-2Loq nqn:nqn.2016-06.io.spdk:cnode25124], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN q%oC_GoeKT*&UtM:Vo43D99]k8[G6>$P-2Loq 00:11:43.831 request: 00:11:43.831 { 00:11:43.831 "method": "nvmf_create_subsystem", 00:11:43.831 "params": { 00:11:43.831 "nqn": "nqn.2016-06.io.spdk:cnode25124", 00:11:43.831 "model_number": "q%oC_\u007fGoeKT*&UtM:V\u007fo43D99]k8[G\u007f6>$P-2L\u007foq" 00:11:43.831 } 00:11:43.831 } 00:11:43.831 Got JSON-RPC error response 00:11:43.831 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:43.831 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:44.089 [2024-12-16 11:33:44.707780] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.089 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:44.347 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:44.347 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:44.347 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:44.347 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:44.347 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:44.605 [2024-12-16 11:33:45.324365] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:44.605 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/12/16 11:33:45 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:44.605 request: 00:11:44.605 { 00:11:44.605 "method": "nvmf_subsystem_remove_listener", 00:11:44.605 "params": { 00:11:44.605 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:44.605 "listen_address": { 00:11:44.605 "trtype": "tcp", 00:11:44.605 "traddr": "", 00:11:44.605 "trsvcid": "4421" 00:11:44.605 } 00:11:44.605 } 00:11:44.605 } 00:11:44.605 Got JSON-RPC error response 00:11:44.605 GoRPCClient: error on JSON-RPC call' 00:11:44.605 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/12/16 11:33:45 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:44.605 request: 00:11:44.605 { 00:11:44.606 "method": "nvmf_subsystem_remove_listener", 00:11:44.606 "params": { 00:11:44.606 "nqn": 
"nqn.2016-06.io.spdk:cnode", 00:11:44.606 "listen_address": { 00:11:44.606 "trtype": "tcp", 00:11:44.606 "traddr": "", 00:11:44.606 "trsvcid": "4421" 00:11:44.606 } 00:11:44.606 } 00:11:44.606 } 00:11:44.606 Got JSON-RPC error response 00:11:44.606 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:44.606 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4599 -i 0 00:11:45.171 [2024-12-16 11:33:45.640610] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4599: invalid cntlid range [0-65519] 00:11:45.171 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/12/16 11:33:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode4599], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:45.171 request: 00:11:45.171 { 00:11:45.171 "method": "nvmf_create_subsystem", 00:11:45.171 "params": { 00:11:45.171 "nqn": "nqn.2016-06.io.spdk:cnode4599", 00:11:45.171 "min_cntlid": 0 00:11:45.171 } 00:11:45.171 } 00:11:45.171 Got JSON-RPC error response 00:11:45.171 GoRPCClient: error on JSON-RPC call' 00:11:45.171 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/12/16 11:33:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode4599], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:45.171 request: 00:11:45.171 { 00:11:45.171 "method": "nvmf_create_subsystem", 00:11:45.171 "params": { 00:11:45.171 "nqn": "nqn.2016-06.io.spdk:cnode4599", 00:11:45.171 "min_cntlid": 0 00:11:45.171 } 00:11:45.171 } 00:11:45.171 Got JSON-RPC error response 00:11:45.171 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:45.171 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14512 -i 65520 00:11:45.430 [2024-12-16 11:33:45.953988] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14512: invalid cntlid range [65520-65519] 00:11:45.430 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/12/16 11:33:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode14512], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:45.430 request: 00:11:45.430 { 00:11:45.430 "method": "nvmf_create_subsystem", 00:11:45.430 "params": { 00:11:45.430 "nqn": "nqn.2016-06.io.spdk:cnode14512", 00:11:45.430 "min_cntlid": 65520 00:11:45.430 } 00:11:45.430 } 00:11:45.430 Got JSON-RPC error response 00:11:45.430 GoRPCClient: error on JSON-RPC call' 00:11:45.430 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/12/16 11:33:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode14512], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:45.430 request: 00:11:45.430 { 00:11:45.430 "method": "nvmf_create_subsystem", 00:11:45.430 "params": { 00:11:45.430 
"nqn": "nqn.2016-06.io.spdk:cnode14512", 00:11:45.430 "min_cntlid": 65520 00:11:45.430 } 00:11:45.430 } 00:11:45.430 Got JSON-RPC error response 00:11:45.430 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:45.430 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8379 -I 0 00:11:45.689 [2024-12-16 11:33:46.214235] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8379: invalid cntlid range [1-0] 00:11:45.689 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/12/16 11:33:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode8379], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:45.689 request: 00:11:45.689 { 00:11:45.689 "method": "nvmf_create_subsystem", 00:11:45.689 "params": { 00:11:45.689 "nqn": "nqn.2016-06.io.spdk:cnode8379", 00:11:45.689 "max_cntlid": 0 00:11:45.689 } 00:11:45.689 } 00:11:45.689 Got JSON-RPC error response 00:11:45.689 GoRPCClient: error on JSON-RPC call' 00:11:45.689 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/12/16 11:33:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode8379], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:45.689 request: 00:11:45.689 { 00:11:45.689 "method": "nvmf_create_subsystem", 00:11:45.689 "params": { 00:11:45.689 "nqn": "nqn.2016-06.io.spdk:cnode8379", 00:11:45.689 "max_cntlid": 0 00:11:45.689 } 00:11:45.689 } 00:11:45.689 Got JSON-RPC error response 00:11:45.689 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:45.689 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21799 -I 65520 00:11:45.948 [2024-12-16 11:33:46.519390] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21799: invalid cntlid range [1-65520] 00:11:45.948 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/12/16 11:33:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21799], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:45.948 request: 00:11:45.948 { 00:11:45.948 "method": "nvmf_create_subsystem", 00:11:45.948 "params": { 00:11:45.948 "nqn": "nqn.2016-06.io.spdk:cnode21799", 00:11:45.948 "max_cntlid": 65520 00:11:45.948 } 00:11:45.948 } 00:11:45.948 Got JSON-RPC error response 00:11:45.948 GoRPCClient: error on JSON-RPC call' 00:11:45.948 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/12/16 11:33:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21799], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:45.948 request: 00:11:45.948 { 00:11:45.948 "method": "nvmf_create_subsystem", 00:11:45.948 "params": { 00:11:45.948 "nqn": "nqn.2016-06.io.spdk:cnode21799", 00:11:45.948 "max_cntlid": 65520 00:11:45.948 } 00:11:45.948 } 00:11:45.948 Got 
JSON-RPC error response 00:11:45.948 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:45.948 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17293 -i 6 -I 5 00:11:46.206 [2024-12-16 11:33:46.779611] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17293: invalid cntlid range [6-5] 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/12/16 11:33:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode17293], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:46.207 request: 00:11:46.207 { 00:11:46.207 "method": "nvmf_create_subsystem", 00:11:46.207 "params": { 00:11:46.207 "nqn": "nqn.2016-06.io.spdk:cnode17293", 00:11:46.207 "min_cntlid": 6, 00:11:46.207 "max_cntlid": 5 00:11:46.207 } 00:11:46.207 } 00:11:46.207 Got JSON-RPC error response 00:11:46.207 GoRPCClient: error on JSON-RPC call' 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/12/16 11:33:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode17293], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:46.207 request: 00:11:46.207 { 00:11:46.207 "method": "nvmf_create_subsystem", 00:11:46.207 "params": { 00:11:46.207 "nqn": "nqn.2016-06.io.spdk:cnode17293", 00:11:46.207 "min_cntlid": 6, 00:11:46.207 "max_cntlid": 5 00:11:46.207 } 00:11:46.207 } 00:11:46.207 Got JSON-RPC error response 00:11:46.207 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:46.207 { 00:11:46.207 "name": "foobar", 00:11:46.207 "method": "nvmf_delete_target", 00:11:46.207 "req_id": 1 00:11:46.207 } 00:11:46.207 Got JSON-RPC error response 00:11:46.207 response: 00:11:46.207 { 00:11:46.207 "code": -32602, 00:11:46.207 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:46.207 }' 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:46.207 { 00:11:46.207 "name": "foobar", 00:11:46.207 "method": "nvmf_delete_target", 00:11:46.207 "req_id": 1 00:11:46.207 } 00:11:46.207 Got JSON-RPC error response 00:11:46.207 response: 00:11:46.207 { 00:11:46.207 "code": -32602, 00:11:46.207 "message": "The specified target doesn't exist, cannot delete it." 
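Taken together, the rejections above pin down SPDK's controller-ID window: min_cntlid must be at least 1, max_cntlid at most 65519 (the top of the 16-bit cntlid space is reserved), and min must not exceed max. Replayed as direct rpc.py calls, with the arbitrary NQNs from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4599  -i 0        # [0-65519]     rejected
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14512 -i 65520    # [65520-65519] rejected
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8379  -I 0        # [1-0]         rejected
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21799 -I 65520    # [1-65520]     rejected
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17293 -i 6 -I 5   # [6-5]         rejected

The final probe, nvmf_delete_target --name foobar through multitarget_rpc.py, confirms that deleting a target that was never created is likewise answered with -32602.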
00:11:46.207 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.207 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:46.207 rmmod nvme_tcp 00:11:46.465 rmmod nvme_fabrics 00:11:46.465 rmmod nvme_keyring 00:11:46.465 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 76244 ']' 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 76244 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 76244 ']' 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 76244 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76244 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76244' 00:11:46.465 killing process with pid 76244 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 76244 00:11:46.465 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 76244 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.724 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.983 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:11:46.983 ************************************ 00:11:46.983 END TEST nvmf_invalid 00:11:46.983 ************************************ 00:11:46.983 00:11:46.983 real 0m6.120s 00:11:46.983 user 0m23.444s 00:11:46.983 sys 0m1.422s 00:11:46.983 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.983 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:46.983 11:33:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:46.983 11:33:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.983 11:33:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.983 11:33:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.983 ************************************ 00:11:46.983 START TEST nvmf_connect_stress 00:11:46.983 
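With the negative probes done, nvmftestfini unwinds the whole fixture: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, target pid 76244 is killed, and iptr strips the firewall rules. The rule cleanup is possible because every rule SPDK adds carries an SPDK_NVMF: comment tag (visible in the setup trace of the next test below), so teardown is a single filter pass, exactly as traced at @791:

    iptables-save | grep -v SPDK_NVMF | iptables-restore

nvmf_veth_fini then deletes the bridge, the veth pairs and the nvmf_tgt_ns_spdk namespace, leaving a clean host for the connect-stress run that starts next.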
************************************ 00:11:46.983 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:46.983 * Looking for test storage... 00:11:46.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:46.983 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:46.983 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:46.983 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:47.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.243 --rc genhtml_branch_coverage=1 00:11:47.243 --rc genhtml_function_coverage=1 00:11:47.243 --rc genhtml_legend=1 00:11:47.243 --rc geninfo_all_blocks=1 00:11:47.243 --rc geninfo_unexecuted_blocks=1 00:11:47.243 00:11:47.243 ' 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:47.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.243 --rc genhtml_branch_coverage=1 00:11:47.243 --rc genhtml_function_coverage=1 00:11:47.243 --rc genhtml_legend=1 00:11:47.243 --rc geninfo_all_blocks=1 00:11:47.243 --rc geninfo_unexecuted_blocks=1 00:11:47.243 00:11:47.243 ' 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:47.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.243 --rc genhtml_branch_coverage=1 00:11:47.243 --rc genhtml_function_coverage=1 00:11:47.243 --rc genhtml_legend=1 00:11:47.243 --rc geninfo_all_blocks=1 00:11:47.243 --rc geninfo_unexecuted_blocks=1 00:11:47.243 00:11:47.243 ' 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:47.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.243 --rc genhtml_branch_coverage=1 00:11:47.243 --rc genhtml_function_coverage=1 00:11:47.243 --rc genhtml_legend=1 00:11:47.243 --rc geninfo_all_blocks=1 00:11:47.243 --rc geninfo_unexecuted_blocks=1 00:11:47.243 00:11:47.243 ' 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
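The connect_stress script opens with a version gate: lcov --version is parsed and compared against 2 via cmp_versions, which splits both strings on the characters .-: and compares them field by field, so 1.15 sorts below 2 because 1 < 2 in the leading field, and the lcov 1.x coverage flags are selected. A compact equivalent of that comparison, reconstructed from the trace (not a copy of scripts/common.sh):

    ver_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller field wins
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal versions: not less-than
    }
    ver_lt 1.15 2 && echo "pre-2.0 lcov flags"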
00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.243 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.244 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:47.244 11:33:47 
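One genuine shell complaint is captured in this stretch: line 33 of test/nvmf/common.sh executes '[' '' -eq 1 ']', i.e. an empty value reaching a numeric test, and bash rejects the empty string as a non-integer. It is harmless here because the test merely returns nonzero and the script carries on, but the reproduction and the usual guard are worth noting (the variable name is illustrative; the trace does not show which expansion was empty):

    x=
    [ "$x" -eq 1 ] && echo on       # bash: [: : integer expression expected
    [ "${x:-0}" -eq 1 ] && echo on  # guarded form: an empty value defaults to 0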
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:47.244 11:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:47.244 Cannot find device "nvmf_init_br" 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:47.244 Cannot find device "nvmf_init_br2" 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:47.244 Cannot find device "nvmf_tgt_br" 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:47.244 Cannot find device "nvmf_tgt_br2" 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:47.244 Cannot find device "nvmf_init_br" 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:47.244 Cannot find device "nvmf_init_br2" 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:47.244 Cannot find device "nvmf_tgt_br" 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:47.244 Cannot find device "nvmf_tgt_br2" 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:47.244 Cannot find device "nvmf_br" 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:47.244 Cannot find device "nvmf_init_if" 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:47.244 Cannot find device "nvmf_init_if2" 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:47.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.244 11:33:47 
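Every "Cannot find device" / "Cannot open network namespace" line in this stretch is expected: nvmf_veth_init begins by removing whatever topology a previous run may have left behind, and each removal is paired with true (the bare # true entries in the trace, sharing the failing command's line number), presumably the cmd || true idiom, so an absent device never fails the script:

    ip link set nvmf_init_br nomaster || true     # fine if the veth is absent
    ip link delete nvmf_br type bridge || true    # fine if the bridge is absent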
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:47.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:47.244 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:47.503 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:47.503 11:33:48 
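The ip netns / ip link add commands above assemble the test network: two veth pairs for the initiator side, two for the target side, the target ends moved into the nvmf_tgt_ns_spdk namespace, and a bridge to tie the host-side peers together (the enslaving follows immediately below). Laid out, with addresses as assigned at @190-@193:

    # nvmf_init_if   10.0.0.1/24  (host)   <-veth->  nvmf_init_br   -> nvmf_br
    # nvmf_init_if2  10.0.0.2/24  (host)   <-veth->  nvmf_init_br2  -> nvmf_br
    # nvmf_tgt_if    10.0.0.3/24  (netns)  <-veth->  nvmf_tgt_br    -> nvmf_br
    # nvmf_tgt_if2   10.0.0.4/24  (netns)  <-veth->  nvmf_tgt_br2   -> nvmf_br

The four pings that follow (@222-@225) then confirm reachability across the bridge in both directions before any NVMe-oF traffic is attempted.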
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:47.503 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:47.503 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:47.503 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:47.503 00:11:47.504 --- 10.0.0.3 ping statistics --- 00:11:47.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.504 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:47.504 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:47.504 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:11:47.504 00:11:47.504 --- 10.0.0.4 ping statistics --- 00:11:47.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.504 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:47.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:47.504 00:11:47.504 --- 10.0.0.1 ping statistics --- 00:11:47.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.504 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:47.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:47.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:11:47.504 00:11:47.504 --- 10.0.0.2 ping statistics --- 00:11:47.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.504 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=76797 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 76797 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 76797 ']' 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.504 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.762 [2024-12-16 11:33:48.270782] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
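The target itself runs inside the namespace, which is why NVMF_APP is prefixed with the netns command at @227. As captured at @508/@509 (backgrounding and pid capture are implied by nvmfpid):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # -m 0xE is a core mask, binary 1110: reactors on cores 1, 2 and 3, matching
    # the three "Reactor started" notices below; -e 0xFFFF enables every
    # tracepoint group, matching the Tracepoint Group Mask notice.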
00:11:47.763 [2024-12-16 11:33:48.271378] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.763 [2024-12-16 11:33:48.422813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:47.763 [2024-12-16 11:33:48.492530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.763 [2024-12-16 11:33:48.492888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.763 [2024-12-16 11:33:48.492923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.763 [2024-12-16 11:33:48.492934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.763 [2024-12-16 11:33:48.492942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.763 [2024-12-16 11:33:48.494427] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.763 [2024-12-16 11:33:48.494596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.763 [2024-12-16 11:33:48.494604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.701 [2024-12-16 11:33:49.335607] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:48.701 11:33:49 
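Stripped of the xtrace bookkeeping, the bring-up traced here and just below is four RPCs against the freshly started target (the namespace attach that presumably pairs NULL1 with cnode1 falls outside this excerpt):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # -u: io_unit_size; -o comes from NVMF_TRANSPORT_OPTS
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                   # allow any host, serial number, max 10 namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420                       # listen inside the netns on the first target IP
    rpc_cmd bdev_null_create NULL1 1000 512              # 1000 MiB null bdev, 512-byte blocks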
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.701 [2024-12-16 11:33:49.353419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.701 NULL1 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.701 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=76849 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.702 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.270 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:49.270 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:49.270 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.270 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.270 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.529 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.529 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:49.529 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.529 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.529 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.788 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.788 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:49.788 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.788 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.788 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.047 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.047 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:50.047 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.047 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.047 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.331 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.331 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:50.331 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.331 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.331 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.897 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.897 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:50.897 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.897 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.897 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.155 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.155 
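From here down to the eventual "No such process" record, the trace is one pattern repeated: a kill -0 liveness check on the connect_stress perf process (PID 76849), then an rpc_cmd pass whose body is hidden behind xtrace_disable. A plausible reconstruction of that loop:

    # hypothetical shape of connect_stress.sh@34-35, inferred from the repeating trace:
    while kill -0 "$PERF_PID" 2> /dev/null; do   # line 34: still stressing connects?
        rpc_cmd < "$rpcs"                        # line 35: replay rpc.txt as concurrent churn
    done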
11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:51.155 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.155 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.155 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.413 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.413 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:51.413 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.414 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.414 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.672 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.672 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:51.672 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.672 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.672 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.931 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.931 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:51.931 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.931 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.931 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.497 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.497 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:52.497 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.497 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.497 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.756 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.756 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:52.756 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.756 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.756 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.015 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.015 11:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:53.015 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.015 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.015 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.274 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.274 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:53.274 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.274 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.274 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.532 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.532 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:53.532 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.532 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.532 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.099 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.099 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:54.099 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.099 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.099 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.358 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.358 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:54.358 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.358 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.358 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.616 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.616 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:54.616 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.616 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.616 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.874 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.874 11:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:54.874 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.874 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.874 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.131 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.131 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:55.131 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.131 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.131 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.697 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.697 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:55.697 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.697 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.697 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.955 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.956 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:55.956 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.956 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.956 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.214 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.214 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:56.214 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.214 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.214 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.472 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.472 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:56.472 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.472 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.473 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.040 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.040 11:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:57.040 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.040 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.040 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.298 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.298 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:57.298 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.298 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.298 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.555 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.555 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:57.555 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.555 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.555 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.812 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.812 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:57.812 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.812 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.812 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.070 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.070 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:58.070 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.070 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.070 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.638 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.638 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:58.638 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.638 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.638 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.895 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.895 11:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:58.895 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.895 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.895 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.895 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76849 00:11:59.154 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (76849) - No such process 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 76849 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:59.154 rmmod nvme_tcp 00:11:59.154 rmmod nvme_fabrics 00:11:59.154 rmmod nvme_keyring 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 76797 ']' 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 76797 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 76797 ']' 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 76797 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76797 00:11:59.154 killing process with pid 76797 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:59.154 
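nvmfcleanup, traced a few records up (common.sh@121-129), syncs and then unloads the kernel initiator modules under set +e so a transiently busy module cannot abort the run; the bare rmmod lines are modprobe -v's output. A sketch of that step (the pause between retries is an assumption, not visible in the trace):

    sync
    if [[ tcp == tcp ]]; then       # the transport variable has already expanded to tcp in the trace
        set +e
        for i in {1..20}; do        # retry: the module may still be held by a closing queue
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1                 # assumed back-off between attempts
        done
        set -e
    fi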
11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76797' 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 76797 00:11:59.154 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 76797 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:59.439 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:59.440 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:59.440 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.698 11:34:00 
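The firewall teardown needs no bookkeeping of its own: because ipts tagged every rule with an 'SPDK_NVMF:' comment on the way in, iptr (common.sh@297/@791) drops them all in one filter pass:

    # verbatim pipeline from the trace:
    iptables-save | grep -v SPDK_NVMF | iptables-restore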
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:11:59.698 00:11:59.698 real 0m12.727s 00:11:59.698 user 0m41.143s 00:11:59.698 sys 0m3.452s 00:11:59.698 ************************************ 00:11:59.698 END TEST nvmf_connect_stress 00:11:59.698 ************************************ 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.698 ************************************ 00:11:59.698 START TEST nvmf_fused_ordering 00:11:59.698 ************************************ 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:59.698 * Looking for test storage... 00:11:59.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:11:59.698 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.958 11:34:00 
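The lcov check starting here (and continuing below) runs scripts/common.sh's field-by-field version compare: for 'lt 1.15 2' it splits both versions on '.-:' and decides on the first differing field. A condensed sketch of that logic (the op dispatch and decimal() input validation are omitted):

    cmp_versions() {   # condensed; usage: cmp_versions 1.15 '<' 2
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"   # (1 15), ver1_l=2
        read -ra ver2 <<< "$3"   # (2),    ver2_l=1
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((ver1[v] > ver2[v])) && return 1
            ((ver1[v] < ver2[v])) && return 0   # 1 < 2: lcov 1.15 predates 2.x
        done
        return 1   # equal does not satisfy "<"
    }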
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:59.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.958 --rc genhtml_branch_coverage=1 00:11:59.958 --rc genhtml_function_coverage=1 00:11:59.958 --rc genhtml_legend=1 00:11:59.958 --rc geninfo_all_blocks=1 00:11:59.958 --rc geninfo_unexecuted_blocks=1 00:11:59.958 00:11:59.958 ' 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:59.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.958 --rc genhtml_branch_coverage=1 00:11:59.958 --rc genhtml_function_coverage=1 00:11:59.958 --rc genhtml_legend=1 00:11:59.958 --rc geninfo_all_blocks=1 00:11:59.958 --rc geninfo_unexecuted_blocks=1 00:11:59.958 00:11:59.958 ' 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:59.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.958 --rc genhtml_branch_coverage=1 00:11:59.958 --rc genhtml_function_coverage=1 00:11:59.958 --rc genhtml_legend=1 00:11:59.958 --rc geninfo_all_blocks=1 00:11:59.958 --rc geninfo_unexecuted_blocks=1 00:11:59.958 00:11:59.958 ' 00:11:59.958 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:59.958 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:59.958 --rc genhtml_branch_coverage=1 00:11:59.958 --rc genhtml_function_coverage=1 00:11:59.958 --rc genhtml_legend=1 00:11:59.958 --rc geninfo_all_blocks=1 00:11:59.958 --rc geninfo_unexecuted_blocks=1 00:11:59.958 00:11:59.958 ' 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:59.959 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:59.959 11:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:59.959 Cannot find device "nvmf_init_br" 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:59.959 Cannot find device "nvmf_init_br2" 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:59.959 Cannot find device "nvmf_tgt_br" 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.959 Cannot find device "nvmf_tgt_br2" 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:59.959 Cannot find device "nvmf_init_br" 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:59.959 Cannot find device "nvmf_init_br2" 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:59.959 Cannot find device "nvmf_tgt_br" 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:59.959 Cannot find device "nvmf_tgt_br2" 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:59.959 Cannot find device "nvmf_br" 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:59.959 Cannot find device "nvmf_init_if" 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:11:59.959 
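Every "Cannot find device" in this stretch is expected: nvmf_veth_init opens by re-running the teardown so a fresh topology never collides with leftovers, and each command is paired with '|| true' (hence the second '# true' trace marker at the same common.sh line number) so the misses do not trip set -e. The pattern, which continues just below through @174:

    # pre-clean pattern at nvmf/common.sh@162-174, per the paired trace markers:
    ip link set nvmf_init_br nomaster || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true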
11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:59.959 Cannot find device "nvmf_init_if2" 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:59.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:11:59.959 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.960 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:11:59.960 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:59.960 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:59.960 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:59.960 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:00.219 11:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:00.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:00.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:12:00.219 00:12:00.219 --- 10.0.0.3 ping statistics --- 00:12:00.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.219 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:00.219 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:00.219 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:12:00.219 00:12:00.219 --- 10.0.0.4 ping statistics --- 00:12:00.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.219 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:00.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:00.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:00.219 00:12:00.219 --- 10.0.0.1 ping statistics --- 00:12:00.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.219 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:00.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:12:00.219 00:12:00.219 --- 10.0.0.2 ping statistics --- 00:12:00.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.219 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=77227 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 77227 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 77227 ']' 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
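
Everything needed to start the target is now in place: all four cross-namespace pings passed, the ACCEPT rules for port 4420 are installed, and nvme-tcp is loaded. The key move is common.sh@227 prepending the namespace wrapper onto NVMF_APP, so that nvmfappstart launches nvmf_tgt inside nvmf_tgt_ns_spdk. A simplified sketch of what that amounts to (the polling loop is a stand-in for waitforlisten, whose real implementation is more involved):

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0x2 &   # -i 0: shm id; -e 0xFFFF: all tracepoint groups; -m 0x2: core 1
    nvmfpid=$!
    # Wait until the app answers on its RPC socket before issuing any rpc.py calls.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done
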
00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.219 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.478 [2024-12-16 11:34:01.030952] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:12:00.478 [2024-12-16 11:34:01.031054] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.478 [2024-12-16 11:34:01.182494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.738 [2024-12-16 11:34:01.246907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.738 [2024-12-16 11:34:01.246964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.738 [2024-12-16 11:34:01.246978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.738 [2024-12-16 11:34:01.246989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.738 [2024-12-16 11:34:01.246998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.738 [2024-12-16 11:34:01.247477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.738 [2024-12-16 11:34:01.435582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.738 [2024-12-16 11:34:01.451723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.738 NULL1 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.738 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:00.997 [2024-12-16 11:34:01.503627] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
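
Stripped of the rpc_cmd plumbing and xtrace noise, the provisioning sequence above plus the test launch reduce to six RPCs and one binary invocation, all with the arguments shown in the log (the $rpc shorthand is ours; rpc.py defaults to the /var/tmp/spdk.sock socket used here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options as logged
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_null_create NULL1 1000 512              # 1000 MiB null bdev, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The -r string is the SPDK transport ID the fused_ordering tool uses to connect, and the 1GB size it reports on attach below matches the 1000 MiB null bdev exposed as namespace 1.
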
00:12:00.997 [2024-12-16 11:34:01.503675] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77269 ] 00:12:01.255 Attached to nqn.2016-06.io.spdk:cnode1 00:12:01.255 Namespace ID: 1 size: 1GB 00:12:01.255 fused_ordering(0) 00:12:01.255 fused_ordering(1) 00:12:01.255 [fused_ordering(2) through fused_ordering(956) elided: the counter increments by exactly one per entry, with no index missing, repeated, or out of order, and timestamps advancing from 00:12:01.255 to 00:12:02.912] fused_ordering(957) 00:12:02.912
fused_ordering(958) 00:12:02.912 fused_ordering(959) 00:12:02.912 fused_ordering(960) 00:12:02.912 fused_ordering(961) 00:12:02.912 fused_ordering(962) 00:12:02.912 fused_ordering(963) 00:12:02.912 fused_ordering(964) 00:12:02.912 fused_ordering(965) 00:12:02.912 fused_ordering(966) 00:12:02.912 fused_ordering(967) 00:12:02.912 fused_ordering(968) 00:12:02.912 fused_ordering(969) 00:12:02.912 fused_ordering(970) 00:12:02.912 fused_ordering(971) 00:12:02.912 fused_ordering(972) 00:12:02.912 fused_ordering(973) 00:12:02.912 fused_ordering(974) 00:12:02.912 fused_ordering(975) 00:12:02.912 fused_ordering(976) 00:12:02.912 fused_ordering(977) 00:12:02.912 fused_ordering(978) 00:12:02.912 fused_ordering(979) 00:12:02.912 fused_ordering(980) 00:12:02.912 fused_ordering(981) 00:12:02.912 fused_ordering(982) 00:12:02.912 fused_ordering(983) 00:12:02.912 fused_ordering(984) 00:12:02.912 fused_ordering(985) 00:12:02.912 fused_ordering(986) 00:12:02.912 fused_ordering(987) 00:12:02.912 fused_ordering(988) 00:12:02.912 fused_ordering(989) 00:12:02.912 fused_ordering(990) 00:12:02.912 fused_ordering(991) 00:12:02.912 fused_ordering(992) 00:12:02.912 fused_ordering(993) 00:12:02.912 fused_ordering(994) 00:12:02.912 fused_ordering(995) 00:12:02.912 fused_ordering(996) 00:12:02.912 fused_ordering(997) 00:12:02.912 fused_ordering(998) 00:12:02.912 fused_ordering(999) 00:12:02.912 fused_ordering(1000) 00:12:02.912 fused_ordering(1001) 00:12:02.912 fused_ordering(1002) 00:12:02.912 fused_ordering(1003) 00:12:02.912 fused_ordering(1004) 00:12:02.912 fused_ordering(1005) 00:12:02.912 fused_ordering(1006) 00:12:02.912 fused_ordering(1007) 00:12:02.912 fused_ordering(1008) 00:12:02.912 fused_ordering(1009) 00:12:02.912 fused_ordering(1010) 00:12:02.912 fused_ordering(1011) 00:12:02.912 fused_ordering(1012) 00:12:02.912 fused_ordering(1013) 00:12:02.912 fused_ordering(1014) 00:12:02.912 fused_ordering(1015) 00:12:02.912 fused_ordering(1016) 00:12:02.912 fused_ordering(1017) 00:12:02.912 fused_ordering(1018) 00:12:02.912 fused_ordering(1019) 00:12:02.912 fused_ordering(1020) 00:12:02.912 fused_ordering(1021) 00:12:02.912 fused_ordering(1022) 00:12:02.912 fused_ordering(1023) 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.912 rmmod nvme_tcp 00:12:02.912 rmmod nvme_fabrics 00:12:02.912 rmmod nvme_keyring 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:02.912 11:34:03 
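
Teardown is underway above: the trap is cleared, nvmftestfini runs, and the nvme-tcp/nvme-fabrics modules are unloaded. Next come killing the target and stripping the firewall rules, and here the setup-phase choice to install every rule through the ipts wrapper with -m comment --comment 'SPDK_NVMF:...' pays off: the iptr step can delete exactly the rules this run added by filtering the tag out of a full dump, instead of tracking rule positions. A sketch of that step plus the process kill (pid as recorded by nvmfappstart; the real killprocess helper does additional checks):

    # Remove only the SPDK_NVMF-tagged iptables rules; everything else survives the round-trip.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Stop the nvmf_tgt started earlier and reap it.
    kill "$nvmfpid" && wait "$nvmfpid"
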
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 77227 ']' 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 77227 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 77227 ']' 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 77227 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.912 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77227 00:12:03.170 killing process with pid 77227 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77227' 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 77227 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 77227 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:03.171 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:03.429 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:03.429 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:03.429 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:03.429 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:03.429 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:03.429 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:12:03.429 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:03.429 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:03.429 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:03.429 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:03.429 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:03.429 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:03.429 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.429 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.429 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.429 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:12:03.429 00:12:03.429 real 0m3.806s 00:12:03.429 user 0m4.209s 00:12:03.429 sys 0m1.425s 00:12:03.429 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.429 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:03.429 ************************************ 00:12:03.429 END TEST nvmf_fused_ordering 00:12:03.429 ************************************ 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.690 ************************************ 00:12:03.690 START TEST nvmf_ns_masking 00:12:03.690 ************************************ 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:03.690 * Looking for test storage... 
00:12:03.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:03.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.690 --rc genhtml_branch_coverage=1 00:12:03.690 --rc genhtml_function_coverage=1 00:12:03.690 --rc genhtml_legend=1 00:12:03.690 --rc geninfo_all_blocks=1 00:12:03.690 --rc geninfo_unexecuted_blocks=1 00:12:03.690 00:12:03.690 ' 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:03.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.690 --rc genhtml_branch_coverage=1 00:12:03.690 --rc genhtml_function_coverage=1 00:12:03.690 --rc genhtml_legend=1 00:12:03.690 --rc geninfo_all_blocks=1 00:12:03.690 --rc geninfo_unexecuted_blocks=1 00:12:03.690 00:12:03.690 ' 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:03.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.690 --rc genhtml_branch_coverage=1 00:12:03.690 --rc genhtml_function_coverage=1 00:12:03.690 --rc genhtml_legend=1 00:12:03.690 --rc geninfo_all_blocks=1 00:12:03.690 --rc geninfo_unexecuted_blocks=1 00:12:03.690 00:12:03.690 ' 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:03.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.690 --rc genhtml_branch_coverage=1 00:12:03.690 --rc genhtml_function_coverage=1 00:12:03.690 --rc genhtml_legend=1 00:12:03.690 --rc geninfo_all_blocks=1 00:12:03.690 --rc geninfo_unexecuted_blocks=1 00:12:03.690 00:12:03.690 ' 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
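
The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.0: `lt 1.15 2` expands to `cmp_versions 1.15 '<' 2`, which splits both versions on dots and dashes and compares field by field, concluding that lcov 1.15 is older and therefore selecting the legacy `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options. A compact sketch of the same logic (function name from the log; the body is our simplification):

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local IFS=.- op=$2
        local -a ver1=($1) ver2=($3)   # split each version into numeric fields
        local i v1 v2
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            v1=${ver1[i]:-0} v2=${ver2[i]:-0}       # missing fields compare as 0
            ((v1 > v2)) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((v1 < v2)) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '=' || $op == '<=' || $op == '>=' ]]   # all fields equal
    }
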
# uname -s 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.690 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.691 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9d9a2916-5c42-4b7c-944f-477d116d55ec 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=79979093-4035-41b5-b3dc-a36bfc3bdbf5 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=02838152-59d2-40ba-9995-fd5bdbb3b9a7 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:03.691 11:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:03.691 Cannot find device "nvmf_init_br" 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:12:03.691 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:03.950 Cannot find device "nvmf_init_br2" 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:03.950 Cannot find device "nvmf_tgt_br" 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:03.950 Cannot find device "nvmf_tgt_br2" 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:03.950 Cannot find device "nvmf_init_br" 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:03.950 Cannot find device "nvmf_init_br2" 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:03.950 Cannot find device "nvmf_tgt_br" 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:03.950 Cannot find device 
"nvmf_tgt_br2" 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:03.950 Cannot find device "nvmf_br" 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:03.950 Cannot find device "nvmf_init_if" 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:03.950 Cannot find device "nvmf_init_if2" 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:03.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:03.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:03.950 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:04.216 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:04.216 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:04.216 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:04.216 
11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:04.216 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:04.216 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:04.216 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:04.216 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:04.217 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:04.217 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:12:04.217 00:12:04.217 --- 10.0.0.3 ping statistics --- 00:12:04.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.217 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:04.217 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:12:04.217 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:12:04.217 00:12:04.217 --- 10.0.0.4 ping statistics --- 00:12:04.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.217 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:04.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:12:04.217 00:12:04.217 --- 10.0.0.1 ping statistics --- 00:12:04.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.217 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:04.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:12:04.217 00:12:04.217 --- 10.0.0.2 ping statistics --- 00:12:04.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.217 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=77511 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 77511 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 77511 ']' 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.217 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.217 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:04.217 [2024-12-16 11:34:04.935656] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:12:04.217 [2024-12-16 11:34:04.935753] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.478 [2024-12-16 11:34:05.090300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.478 [2024-12-16 11:34:05.146821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.478 [2024-12-16 11:34:05.146893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.478 [2024-12-16 11:34:05.146915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.478 [2024-12-16 11:34:05.146925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.478 [2024-12-16 11:34:05.146934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.478 [2024-12-16 11:34:05.147395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.736 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.736 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:04.736 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.736 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:04.736 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:04.736 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.736 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:04.994 [2024-12-16 11:34:05.629938] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.994 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:04.994 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:04.994 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:05.253 Malloc1 00:12:05.253 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:05.820 Malloc2 00:12:05.820 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:06.078 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:06.337 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:06.595 [2024-12-16 11:34:07.125721] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:06.595 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:06.595 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 02838152-59d2-40ba-9995-fd5bdbb3b9a7 -a 10.0.0.3 -s 4420 -i 4 00:12:06.595 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:06.595 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:06.595 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.595 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:06.595 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:09.124 [ 0]:0x1 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
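
The ns_is_visible checks traced above reduce to two nvme-cli probes against the connected controller. A minimal sketch of the idiom, assuming the controller enumerated as /dev/nvme0 and NSID 1 as in this run:

    # a masked NSID is simply absent from the active namespace list
    nvme list-ns /dev/nvme0 | grep 0x1

    # a visible namespace reports its real NGUID; a masked one identifies as all zeroes
    nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
    [[ "$nguid" != "00000000000000000000000000000000" ]] && echo "NSID 1 visible"
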
00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98db5db802d3402aaa3e41de8720f423 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98db5db802d3402aaa3e41de8720f423 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:09.124 [ 0]:0x1 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98db5db802d3402aaa3e41de8720f423 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98db5db802d3402aaa3e41de8720f423 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:09.124 [ 1]:0x2 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00e9a8dea06e4bd0a692d98a162f62ce 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00e9a8dea06e4bd0a692d98a162f62ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:09.124 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.383 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.641 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:09.899 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:09.899 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 02838152-59d2-40ba-9995-fd5bdbb3b9a7 -a 10.0.0.3 -s 4420 -i 4 00:12:10.157 11:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:10.157 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:10.157 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.157 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:10.157 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:10.157 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:12.083 [ 0]:0x2 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:12.083 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.342 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00e9a8dea06e4bd0a692d98a162f62ce 00:12:12.342 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00e9a8dea06e4bd0a692d98a162f62ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.342 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:12.601 [ 0]:0x1 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98db5db802d3402aaa3e41de8720f423 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98db5db802d3402aaa3e41de8720f423 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:12.601 [ 1]:0x2 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00e9a8dea06e4bd0a692d98a162f62ce 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00e9a8dea06e4bd0a692d98a162f62ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.601 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.168 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:13.168 [ 0]:0x2 00:12:13.169 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:13.169 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.169 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00e9a8dea06e4bd0a692d98a162f62ce 00:12:13.169 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 00e9a8dea06e4bd0a692d98a162f62ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.169 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:13.169 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.169 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:13.427 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:13.427 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 02838152-59d2-40ba-9995-fd5bdbb3b9a7 -a 10.0.0.3 -s 4420 -i 4 00:12:13.686 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:13.686 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:13.686 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.686 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:13.686 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:13.686 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:15.586 [ 0]:0x1 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:12:15.586 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.845 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98db5db802d3402aaa3e41de8720f423 00:12:15.845 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98db5db802d3402aaa3e41de8720f423 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.845 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:15.845 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.845 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:15.845 [ 1]:0x2 00:12:15.845 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.845 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:15.845 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00e9a8dea06e4bd0a692d98a162f62ce 00:12:15.845 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00e9a8dea06e4bd0a692d98a162f62ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.845 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 
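
This stretch is the heart of the masking test: Malloc1 was re-added with --no-auto-visible, so NSID 1 stays hidden from every host until visibility is granted per host NQN, and nvmf_ns_remove_host hides it again. Condensed to the bare RPC flow, with the same NQNs and bdev names as this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # attach the bdev as NSID 1, hidden from all hosts by default
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant, then revoke, visibility for one host NQN
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The negative check that follows exercises the complementary rule: nvmf_ns_remove_host against NSID 2, which was created without --no-auto-visible and is therefore auto-visible, is rejected with Code=-32602 Invalid parameters.
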
00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.104 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:16.363 [ 0]:0x2 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00e9a8dea06e4bd0a692d98a162f62ce 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00e9a8dea06e4bd0a692d98a162f62ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.363 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.364 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.364 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.364 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.364 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:16.364 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:16.622 [2024-12-16 11:34:17.220507] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:16.622 2024/12/16 11:34:17 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:12:16.622 request: 00:12:16.622 { 00:12:16.622 "method": "nvmf_ns_remove_host", 00:12:16.622 "params": { 00:12:16.622 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.622 "nsid": 2, 00:12:16.622 "host": "nqn.2016-06.io.spdk:host1" 00:12:16.622 } 00:12:16.622 } 00:12:16.622 Got JSON-RPC error response 00:12:16.622 GoRPCClient: error on JSON-RPC call 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:16.622 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:12:16.623 [ 0]:0x2 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00e9a8dea06e4bd0a692d98a162f62ce 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00e9a8dea06e4bd0a692d98a162f62ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:16.623 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.881 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=77879 00:12:16.881 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:16.881 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.881 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 77879 /var/tmp/host.sock 00:12:16.881 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 77879 ']' 00:12:16.881 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:16.881 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:16.881 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:16.881 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.881 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:16.881 [2024-12-16 11:34:17.478816] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
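The ns_is_visible checks traced above reduce to two nvme-cli probes: the namespace ID must show up in list-ns output, and its NGUID must not be all zeroes, since a masked namespace reports a zeroed identity. A minimal standalone sketch of that helper, assuming nvme-cli and jq are installed and a controller is already connected at /dev/nvme0 (this is a reconstruction in the spirit of target/ns_masking.sh, not the original script):

#!/usr/bin/env bash
# Visibility probe (sketch). Assumes nvme-cli, jq, and an attached /dev/nvme0.
ns_is_visible() {
    local nsid=$1
    # list-ns prints one "[ i]:0xN" row per namespace the controller exposes
    nvme list-ns /dev/nvme0 | grep -q -- "$nsid" || return 1
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    # an all-zero NGUID (32 zero nibbles) means the identity is hidden
    [[ $nguid != "00000000000000000000000000000000" ]]
}

ns_is_visible 0x2 && echo "nsid 0x2 visible" || echo "nsid 0x2 masked"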
00:12:16.881 [2024-12-16 11:34:17.478938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77879 ] 00:12:16.881 [2024-12-16 11:34:17.632406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.140 [2024-12-16 11:34:17.691513] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.399 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.399 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:17.399 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.658 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:17.916 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9d9a2916-5c42-4b7c-944f-477d116d55ec 00:12:17.916 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:17.916 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9D9A29165C424B7C944F477D116D55EC -i 00:12:18.175 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 79979093-4035-41b5-b3dc-a36bfc3bdbf5 00:12:18.175 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:18.175 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 79979093403541B5B3DCA36BFC3BDBF5 -i 00:12:18.434 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:18.693 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:18.951 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:18.951 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:19.210 nvme0n1 00:12:19.210 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:19.210 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:19.777 nvme1n2 00:12:19.777 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:19.777 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:19.777 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:19.777 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:19.777 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:20.036 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:20.036 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:20.036 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:20.036 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:20.294 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9d9a2916-5c42-4b7c-944f-477d116d55ec == \9\d\9\a\2\9\1\6\-\5\c\4\2\-\4\b\7\c\-\9\4\4\f\-\4\7\7\d\1\1\6\d\5\5\e\c ]] 00:12:20.294 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:20.294 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:20.294 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:20.552 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 79979093-4035-41b5-b3dc-a36bfc3bdbf5 == \7\9\9\7\9\0\9\3\-\4\0\3\5\-\4\1\b\5\-\b\3\d\c\-\a\3\6\b\f\c\3\b\d\b\f\5 ]] 00:12:20.552 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.814 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 9d9a2916-5c42-4b7c-944f-477d116d55ec 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9D9A29165C424B7C944F477D116D55EC 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9D9A29165C424B7C944F477D116D55EC 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:21.092 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9D9A29165C424B7C944F477D116D55EC 00:12:21.350 [2024-12-16 11:34:22.094857] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:21.350 [2024-12-16 11:34:22.094910] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:21.350 [2024-12-16 11:34:22.094923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.350 2024/12/16 11:34:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:9D9A29165C424B7C944F477D116D55EC no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:21.350 request: 00:12:21.350 { 00:12:21.350 "method": "nvmf_subsystem_add_ns", 00:12:21.350 "params": { 00:12:21.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:21.350 "namespace": { 00:12:21.350 "bdev_name": "invalid", 00:12:21.350 "nsid": 1, 00:12:21.350 "nguid": "9D9A29165C424B7C944F477D116D55EC", 00:12:21.350 "no_auto_visible": false, 00:12:21.350 "hide_metadata": false 00:12:21.350 } 00:12:21.350 } 00:12:21.350 } 00:12:21.350 Got JSON-RPC error response 00:12:21.350 GoRPCClient: error on JSON-RPC call 00:12:21.609 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:21.609 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:21.609 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:21.609 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:21.609 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 9d9a2916-5c42-4b7c-944f-477d116d55ec 00:12:21.609 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:21.609 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9D9A29165C424B7C944F477D116D55EC -i 
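The -g values handed to nvmf_subsystem_add_ns above come from uuid2nguid, which is little more than stripping the dashes from a UUID; judging by the 9D9A... value in the trace it also upper-cases, so the following sketch is consistent with the output seen here (bash 4+ for the ${1^^} expansion; the in-tree helper lives in nvmf/common.sh):

# uuid2nguid sketch: an NGUID is the UUID's 32 hex digits with the dashes
# removed; the upper-casing is inferred from the -g values in this trace.
uuid2nguid() { tr -d - <<< "${1^^}"; }

uuid2nguid 9d9a2916-5c42-4b7c-944f-477d116d55ec
# -> 9D9A29165C424B7C944F477D116D55EC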
00:12:21.868 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:23.769 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:23.769 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:23.769 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:24.028 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:24.028 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 77879 00:12:24.028 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 77879 ']' 00:12:24.028 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 77879 00:12:24.028 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:24.028 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.028 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77879 00:12:24.287 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:24.287 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:24.287 killing process with pid 77879 00:12:24.287 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77879' 00:12:24.287 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 77879 00:12:24.287 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 77879 00:12:24.545 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.803 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:24.803 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:24.803 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:24.803 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:24.803 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.803 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:24.803 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.803 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.803 rmmod nvme_tcp 00:12:24.803 rmmod nvme_fabrics 00:12:25.062 rmmod nvme_keyring 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@517 -- # '[' -n 77511 ']' 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 77511 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 77511 ']' 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 77511 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77511 00:12:25.062 killing process with pid 77511 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77511' 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 77511 00:12:25.062 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 77511 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:25.321 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 
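The iptr call above undoes the test's firewall changes without tracking individual rules: setup (ipts, which appears again in the auth-target setup further down) tags every inserted rule with an -m comment marker, so teardown is a single filter over iptables-save. The pair, as it reads from this trace (needs root and the iptables comment match module):

# Tag-and-sweep firewall rules, as traced in nvmf/common.sh.
# Setup: insert the requested rule plus a recognizable comment tag.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# Teardown: reload the whole ruleset minus every tagged rule in one pass.
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }
iptr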
00:12:25.321 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:25.321 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:12:25.580 00:12:25.580 real 0m21.974s 00:12:25.580 user 0m37.227s 00:12:25.580 sys 0m3.318s 00:12:25.580 ************************************ 00:12:25.580 END TEST nvmf_ns_masking 00:12:25.580 ************************************ 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.580 ************************************ 00:12:25.580 START TEST nvmf_auth_target 00:12:25.580 ************************************ 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:25.580 * Looking for test storage... 
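Both negative cases in the masking test above went through the NOT wrapper, which inverts a command's exit status while still treating signal deaths (status above 128) as real failures. A reduced sketch of that pattern; the actual helper in common/autotest_common.sh additionally validates that its argument is a runnable function or executable:

# NOT: run a command that is expected to fail; fail the test if it succeeds.
NOT() {
    local es=0
    "$@" || es=$?                    # capture the exit status
    (( es > 128 )) && return "$es"   # a crash/signal is not an expected failure
    (( es != 0 ))                    # success here means the command properly failed
}

# e.g. the target must reject removing a host from a namespace it never exposed:
NOT scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1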
00:12:25.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:25.580 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:25.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.842 --rc genhtml_branch_coverage=1 00:12:25.842 --rc genhtml_function_coverage=1 00:12:25.842 --rc genhtml_legend=1 00:12:25.842 --rc geninfo_all_blocks=1 00:12:25.842 --rc geninfo_unexecuted_blocks=1 00:12:25.842 00:12:25.842 ' 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:25.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.842 --rc genhtml_branch_coverage=1 00:12:25.842 --rc genhtml_function_coverage=1 00:12:25.842 --rc genhtml_legend=1 00:12:25.842 --rc geninfo_all_blocks=1 00:12:25.842 --rc geninfo_unexecuted_blocks=1 00:12:25.842 00:12:25.842 ' 00:12:25.842 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:25.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.842 --rc genhtml_branch_coverage=1 00:12:25.843 --rc genhtml_function_coverage=1 00:12:25.843 --rc genhtml_legend=1 00:12:25.843 --rc geninfo_all_blocks=1 00:12:25.843 --rc geninfo_unexecuted_blocks=1 00:12:25.843 00:12:25.843 ' 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:25.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.843 --rc genhtml_branch_coverage=1 00:12:25.843 --rc genhtml_function_coverage=1 00:12:25.843 --rc genhtml_legend=1 00:12:25.843 --rc geninfo_all_blocks=1 00:12:25.843 --rc geninfo_unexecuted_blocks=1 00:12:25.843 00:12:25.843 ' 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.843 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:25.843 
11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:25.843 Cannot find device "nvmf_init_br" 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:25.843 Cannot find device "nvmf_init_br2" 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:25.843 Cannot find device "nvmf_tgt_br" 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.843 Cannot find device "nvmf_tgt_br2" 00:12:25.843 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:25.844 Cannot find device "nvmf_init_br" 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:25.844 Cannot find device "nvmf_init_br2" 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:25.844 Cannot find device "nvmf_tgt_br" 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:25.844 Cannot find device "nvmf_tgt_br2" 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:25.844 Cannot find device "nvmf_br" 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:25.844 Cannot find device "nvmf_init_if" 00:12:25.844 11:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:25.844 Cannot find device "nvmf_init_if2" 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:25.844 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:26.108 11:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:26.108 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:26.108 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:12:26.108 00:12:26.108 --- 10.0.0.3 ping statistics --- 00:12:26.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.108 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:26.108 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:26.108 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:12:26.108 00:12:26.108 --- 10.0.0.4 ping statistics --- 00:12:26.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.108 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:26.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:26.108 00:12:26.108 --- 10.0.0.1 ping statistics --- 00:12:26.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.108 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:26.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:12:26.108 00:12:26.108 --- 10.0.0.2 ping statistics --- 00:12:26.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.108 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:26.108 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=78376 00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 78376 00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 78376 ']' 00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
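The gen_dhchap_key calls that follow mint DH-HMAC-CHAP secrets in the interchange format defined by NVMe TP 8006: random hex characters serve as the secret, which is base64-encoded together with its little-endian CRC-32 and prefixed with DHHC-1 plus a two-digit hash index (00 none, 01 sha256, 02 sha384, 03 sha512, matching the digests map in the trace). A sketch of that recipe, assuming xxd and python3; the in-tree helper lives in nvmf/common.sh, so treat this as an approximation of it:

# Sketch of gen_dhchap_key <digest> <len>: <len> hex chars of entropy,
# wrapped in the DHHC-1 secret representation (NVMe TP 8006).
gen_dhchap_key() {
    local digest=$1 len=$2
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len/2 bytes -> len hex chars
    python3 - "$key" "${digests[$digest]}" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 of the secret bytes
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
}

gen_dhchap_key null 48   # -> DHHC-1:00:<base64>:  (cf. /tmp/spdk.key-null.LLV below)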
00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.109 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=78401 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cbbdb7117f706aa32ad0d7ea810ddf45f23dfb372f30bcbe 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.LLV 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cbbdb7117f706aa32ad0d7ea810ddf45f23dfb372f30bcbe 0 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cbbdb7117f706aa32ad0d7ea810ddf45f23dfb372f30bcbe 0 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cbbdb7117f706aa32ad0d7ea810ddf45f23dfb372f30bcbe 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:26.676 11:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.LLV 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.LLV 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.LLV 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a81da0e07c756e5b644ae0c3a3b8b230534c949b3836832e619e8fca20e9cb9e 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.CuG 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a81da0e07c756e5b644ae0c3a3b8b230534c949b3836832e619e8fca20e9cb9e 3 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a81da0e07c756e5b644ae0c3a3b8b230534c949b3836832e619e8fca20e9cb9e 3 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a81da0e07c756e5b644ae0c3a3b8b230534c949b3836832e619e8fca20e9cb9e 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.CuG 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.CuG 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.CuG 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:26.676 11:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:26.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=75b0816f2b85768f65aa75992202ce9e 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Qet 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 75b0816f2b85768f65aa75992202ce9e 1 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 75b0816f2b85768f65aa75992202ce9e 1 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=75b0816f2b85768f65aa75992202ce9e 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Qet 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Qet 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Qet 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:26.935 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=14d002e36489c524add95c094f147550e63d66f8133513f2 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ams 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 14d002e36489c524add95c094f147550e63d66f8133513f2 2 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 14d002e36489c524add95c094f147550e63d66f8133513f2 2 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=14d002e36489c524add95c094f147550e63d66f8133513f2 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ams 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ams 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.ams 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d647f07e90158c7d4be17f9e106b729bf57d34752eebadbc 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BTV 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d647f07e90158c7d4be17f9e106b729bf57d34752eebadbc 2 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d647f07e90158c7d4be17f9e106b729bf57d34752eebadbc 2 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d647f07e90158c7d4be17f9e106b729bf57d34752eebadbc 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BTV 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BTV 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.BTV 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:26.936 11:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8fc1930534bafe49e18c46dddb755d59 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZAi 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8fc1930534bafe49e18c46dddb755d59 1 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8fc1930534bafe49e18c46dddb755d59 1 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8fc1930534bafe49e18c46dddb755d59 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZAi 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZAi 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ZAi 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:26.936 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:27.194 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:27.194 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1b7db82ef66d1512af86ccbd310592fb44ea51493e11f50c709aa9515d9cd9b3 00:12:27.194 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Riu 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
1b7db82ef66d1512af86ccbd310592fb44ea51493e11f50c709aa9515d9cd9b3 3 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1b7db82ef66d1512af86ccbd310592fb44ea51493e11f50c709aa9515d9cd9b3 3 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1b7db82ef66d1512af86ccbd310592fb44ea51493e11f50c709aa9515d9cd9b3 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Riu 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Riu 00:12:27.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Riu 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 78376 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 78376 ']' 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.195 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:27.453 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.453 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:27.453 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 78401 /var/tmp/host.sock 00:12:27.453 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 78401 ']' 00:12:27.453 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:27.453 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.453 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
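
The gen_dhchap_key records above are the recipe for every secret in this run: xxd -p -c0 pulls N random bytes out of /dev/urandom as a hex string, mktemp -t names a /tmp/spdk.key-<digest>.XXX file, and a small inline python program wraps the hex string in its DHHC-1 representation before the file is locked down with chmod 0600. What follows is a minimal standalone sketch of that wrapping step, assuming (as the secrets echoed later in the trace suggest) that the base64 payload is the ASCII hex key followed by its CRC-32 appended little-endian, the usual DH-HMAC-CHAP secret representation:

# Sketch only: rebuild a keys[1]-style secret (16 random bytes -> 32 hex chars;
# digest id 01 = sha256, with 00/02/03 = null/sha384/sha512 in the trace).
key=$(xxd -p -c0 -l 16 /dev/urandom)
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")  # assumption: CRC-32, little-endian
print("DHHC-1:{:02x}:{}:".format(1, base64.b64encode(key + crc).decode()))
PY

Base64-decoding the payload of any DHHC-1:xx:...: string that nvme_connect receives further down reproduces the matching hex key printed at generation time, which makes for a quick sanity check on a key file.
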
00:12:27.453 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.453 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.LLV 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.LLV 00:12:27.711 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.LLV 00:12:28.280 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.CuG ]] 00:12:28.280 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CuG 00:12:28.280 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.280 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.280 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.280 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CuG 00:12:28.280 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CuG 00:12:28.539 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:28.539 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Qet 00:12:28.539 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.539 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.539 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.539 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Qet 00:12:28.539 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Qet 00:12:28.797 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.ams ]] 00:12:28.797 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ams 00:12:28.797 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.797 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.797 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.797 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ams 00:12:28.797 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ams 00:12:29.055 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:29.055 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BTV 00:12:29.055 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.055 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.055 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.055 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.BTV 00:12:29.055 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.BTV 00:12:29.313 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ZAi ]] 00:12:29.313 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZAi 00:12:29.313 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.313 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.313 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.313 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZAi 00:12:29.313 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZAi 00:12:29.571 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:29.571 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Riu 00:12:29.571 11:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.571 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.571 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.571 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Riu 00:12:29.571 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Riu 00:12:29.829 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:29.829 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:29.829 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:29.829 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.829 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:29.829 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.087 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.397 00:12:30.656 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.656 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.656 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.915 { 00:12:30.915 "auth": { 00:12:30.915 "dhgroup": "null", 00:12:30.915 "digest": "sha256", 00:12:30.915 "state": "completed" 00:12:30.915 }, 00:12:30.915 "cntlid": 1, 00:12:30.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:12:30.915 "listen_address": { 00:12:30.915 "adrfam": "IPv4", 00:12:30.915 "traddr": "10.0.0.3", 00:12:30.915 "trsvcid": "4420", 00:12:30.915 "trtype": "TCP" 00:12:30.915 }, 00:12:30.915 "peer_address": { 00:12:30.915 "adrfam": "IPv4", 00:12:30.915 "traddr": "10.0.0.1", 00:12:30.915 "trsvcid": "52008", 00:12:30.915 "trtype": "TCP" 00:12:30.915 }, 00:12:30.915 "qid": 0, 00:12:30.915 "state": "enabled", 00:12:30.915 "thread": "nvmf_tgt_poll_group_000" 00:12:30.915 } 00:12:30.915 ]' 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.915 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.173 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:12:31.173 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.447 11:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.447 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.447 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.706 { 00:12:36.706 "auth": { 00:12:36.706 "dhgroup": "null", 00:12:36.706 "digest": "sha256", 00:12:36.706 "state": "completed" 00:12:36.706 }, 00:12:36.706 "cntlid": 3, 00:12:36.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:12:36.706 "listen_address": { 00:12:36.706 "adrfam": "IPv4", 00:12:36.706 "traddr": "10.0.0.3", 00:12:36.706 "trsvcid": "4420", 00:12:36.706 "trtype": "TCP" 00:12:36.706 }, 00:12:36.706 "peer_address": { 00:12:36.706 "adrfam": "IPv4", 00:12:36.706 "traddr": "10.0.0.1", 00:12:36.706 "trsvcid": "34560", 00:12:36.706 "trtype": "TCP" 00:12:36.706 }, 00:12:36.706 "qid": 0, 00:12:36.706 "state": "enabled", 00:12:36.706 "thread": "nvmf_tgt_poll_group_000" 00:12:36.706 } 00:12:36.706 ]' 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.706 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.964 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret 
DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:12:36.964 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:12:37.899 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.899 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:37.899 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.899 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.899 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.899 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.899 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:37.899 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.158 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.416 00:12:38.416 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.416 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.416 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.674 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.674 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.674 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.674 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.674 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.674 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.674 { 00:12:38.674 "auth": { 00:12:38.674 "dhgroup": "null", 00:12:38.674 "digest": "sha256", 00:12:38.674 "state": "completed" 00:12:38.674 }, 00:12:38.674 "cntlid": 5, 00:12:38.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:12:38.674 "listen_address": { 00:12:38.674 "adrfam": "IPv4", 00:12:38.674 "traddr": "10.0.0.3", 00:12:38.674 "trsvcid": "4420", 00:12:38.674 "trtype": "TCP" 00:12:38.674 }, 00:12:38.674 "peer_address": { 00:12:38.674 "adrfam": "IPv4", 00:12:38.674 "traddr": "10.0.0.1", 00:12:38.674 "trsvcid": "34578", 00:12:38.674 "trtype": "TCP" 00:12:38.674 }, 00:12:38.674 "qid": 0, 00:12:38.674 "state": "enabled", 00:12:38.674 "thread": "nvmf_tgt_poll_group_000" 00:12:38.674 } 00:12:38.674 ]' 00:12:38.674 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.674 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:38.674 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.932 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:38.932 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.932 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.933 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.933 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.191 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:12:39.191 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:12:39.758 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.759 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:39.759 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.759 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.759 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.759 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.759 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:39.759 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.325 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.583 00:12:40.583 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.583 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.583 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.842 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.842 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.842 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.842 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.842 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.842 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.842 { 00:12:40.842 "auth": { 00:12:40.842 "dhgroup": "null", 00:12:40.842 "digest": "sha256", 00:12:40.842 "state": "completed" 00:12:40.842 }, 00:12:40.842 "cntlid": 7, 00:12:40.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:12:40.842 "listen_address": { 00:12:40.842 "adrfam": "IPv4", 00:12:40.842 "traddr": "10.0.0.3", 00:12:40.842 "trsvcid": "4420", 00:12:40.842 "trtype": "TCP" 00:12:40.842 }, 00:12:40.842 "peer_address": { 00:12:40.842 "adrfam": "IPv4", 00:12:40.842 "traddr": "10.0.0.1", 00:12:40.842 "trsvcid": "34598", 00:12:40.842 "trtype": "TCP" 00:12:40.842 }, 00:12:40.842 "qid": 0, 00:12:40.842 "state": "enabled", 00:12:40.842 "thread": "nvmf_tgt_poll_group_000" 00:12:40.842 } 00:12:40.842 ]' 00:12:40.842 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.842 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:40.842 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.842 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:40.842 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.100 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.100 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.100 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.358 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:12:41.358 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:12:41.926 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.926 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:41.926 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.926 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.184 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.184 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:42.184 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.184 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:42.184 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.442 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.700 00:12:42.700 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.700 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.700 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.959 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.959 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.959 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.959 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.959 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.959 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.959 { 00:12:42.959 "auth": { 00:12:42.959 "dhgroup": "ffdhe2048", 00:12:42.959 "digest": "sha256", 00:12:42.959 "state": "completed" 00:12:42.959 }, 00:12:42.959 "cntlid": 9, 00:12:42.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:12:42.959 "listen_address": { 00:12:42.959 "adrfam": "IPv4", 00:12:42.959 "traddr": "10.0.0.3", 00:12:42.959 "trsvcid": "4420", 00:12:42.959 "trtype": "TCP" 00:12:42.959 }, 00:12:42.959 "peer_address": { 00:12:42.959 "adrfam": "IPv4", 00:12:42.959 "traddr": "10.0.0.1", 00:12:42.959 "trsvcid": "34628", 00:12:42.959 "trtype": "TCP" 00:12:42.959 }, 00:12:42.959 "qid": 0, 00:12:42.959 "state": "enabled", 00:12:42.959 "thread": "nvmf_tgt_poll_group_000" 00:12:42.959 } 00:12:42.959 ]' 00:12:42.959 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.959 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.959 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.217 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:43.217 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.217 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.217 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.217 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.475 
11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:12:43.475 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:12:44.410 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.410 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:44.410 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.410 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.410 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.410 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.410 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:44.410 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.977 00:12:44.977 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.977 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.977 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.235 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.235 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.235 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.235 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.235 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.235 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.235 { 00:12:45.235 "auth": { 00:12:45.235 "dhgroup": "ffdhe2048", 00:12:45.235 "digest": "sha256", 00:12:45.235 "state": "completed" 00:12:45.235 }, 00:12:45.235 "cntlid": 11, 00:12:45.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:12:45.235 "listen_address": { 00:12:45.235 "adrfam": "IPv4", 00:12:45.235 "traddr": "10.0.0.3", 00:12:45.235 "trsvcid": "4420", 00:12:45.235 "trtype": "TCP" 00:12:45.235 }, 00:12:45.235 "peer_address": { 00:12:45.235 "adrfam": "IPv4", 00:12:45.235 "traddr": "10.0.0.1", 00:12:45.235 "trsvcid": "43818", 00:12:45.235 "trtype": "TCP" 00:12:45.235 }, 00:12:45.235 "qid": 0, 00:12:45.235 "state": "enabled", 00:12:45.235 "thread": "nvmf_tgt_poll_group_000" 00:12:45.235 } 00:12:45.235 ]' 00:12:45.235 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.235 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:45.235 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.235 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:45.235 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.493 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.493 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.493 
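
Two SPDK applications are at work in this trace: the NVMe-oF target, driven through rpc_cmd on rpc.py's default socket, and a second SPDK app whose bdev_nvme layer acts as the host/initiator, driven through the hostrpc wrapper that points rpc.py at /var/tmp/host.sock. A rough sketch of the two helpers implied by the traced commands (the exact definitions live in the test scripts, not in this excerpt):

    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                       # target app, default socket
    hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; } # host/initiator app
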
11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.752 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:12:45.752 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:12:46.318 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.318 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:46.318 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.318 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.318 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.318 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.318 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:46.318 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:46.576 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:46.576 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.576 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:46.576 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:46.576 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:46.576 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.576 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.576 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.576 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.835 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:46.835 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.835 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.835 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.094 00:12:47.094 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.094 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.094 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.397 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.397 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.397 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.397 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.397 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.397 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.397 { 00:12:47.397 "auth": { 00:12:47.397 "dhgroup": "ffdhe2048", 00:12:47.397 "digest": "sha256", 00:12:47.397 "state": "completed" 00:12:47.397 }, 00:12:47.397 "cntlid": 13, 00:12:47.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:12:47.397 "listen_address": { 00:12:47.397 "adrfam": "IPv4", 00:12:47.397 "traddr": "10.0.0.3", 00:12:47.397 "trsvcid": "4420", 00:12:47.397 "trtype": "TCP" 00:12:47.397 }, 00:12:47.397 "peer_address": { 00:12:47.397 "adrfam": "IPv4", 00:12:47.397 "traddr": "10.0.0.1", 00:12:47.397 "trsvcid": "43836", 00:12:47.397 "trtype": "TCP" 00:12:47.397 }, 00:12:47.397 "qid": 0, 00:12:47.397 "state": "enabled", 00:12:47.397 "thread": "nvmf_tgt_poll_group_000" 00:12:47.397 } 00:12:47.397 ]' 00:12:47.397 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.397 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:47.397 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.397 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:47.397 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.655 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.655 11:34:48 
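
The key0..key3 and ckey0..ckey3 arguments are names, not secrets: both RPC servers resolve them through SPDK's keyring, which the test populates before this excerpt begins. Assuming the keyring_file backend (an assumption; the registration step is not shown here, and /tmp/key2.txt is a hypothetical path), that earlier setup would look roughly like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # each file holds one DHHC-1:..: secret string; register on both sides
    $rpc keyring_file_add_key key2 /tmp/key2.txt
    $rpc -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/key2.txt
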
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.655 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.913 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:12:47.913 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:12:48.479 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.480 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:48.480 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.480 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.480 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.480 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.480 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:48.480 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
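
For the kernel-initiator passes, nvme-cli drives the same handshake through the fabrics interface. Annotating the invocation traced above ($hostnqn, $hostid, $key, and $ckey stand for the literal values in the log):

    nvme connect -t tcp -a 10.0.0.3 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q "$hostnqn" --hostid "$hostid" \
        -i 1 \
        -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    # -i 1: a single I/O queue; -l 0: ctrl-loss-tmo of 0, so a lost or
    # rejected connection fails immediately instead of retrying forever
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The DHHC-1:tt:...: strings are the NVMe DH-HMAC-CHAP textual secret format: tt selects the transformation applied to the secret before use (00 = none, 01/02/03 = SHA-256/384/512), and the middle field is the base64 of the raw secret followed by its CRC-32.
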
00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.738 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:49.304 00:12:49.304 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.304 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.304 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.563 { 00:12:49.563 "auth": { 00:12:49.563 "dhgroup": "ffdhe2048", 00:12:49.563 "digest": "sha256", 00:12:49.563 "state": "completed" 00:12:49.563 }, 00:12:49.563 "cntlid": 15, 00:12:49.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:12:49.563 "listen_address": { 00:12:49.563 "adrfam": "IPv4", 00:12:49.563 "traddr": "10.0.0.3", 00:12:49.563 "trsvcid": "4420", 00:12:49.563 "trtype": "TCP" 00:12:49.563 }, 00:12:49.563 "peer_address": { 00:12:49.563 "adrfam": "IPv4", 00:12:49.563 "traddr": "10.0.0.1", 00:12:49.563 "trsvcid": "43870", 00:12:49.563 "trtype": "TCP" 00:12:49.563 }, 00:12:49.563 "qid": 0, 00:12:49.563 "state": "enabled", 00:12:49.563 "thread": "nvmf_tgt_poll_group_000" 00:12:49.563 } 00:12:49.563 ]' 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.563 
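
Note the asymmetry in the key3 pass above: nvmf_subsystem_add_host and the attach carry only --dhchap-key, with no --dhchap-ctrlr-key. That comes from the bash expansion visible in the trace, which emits the controller-key flag only when a ckey exists for that index (variable names other than ckey/ckeys are illustrative):

    # an empty ckeys[3] leaves the array empty, so the flag pair disappears
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"

Without a controller key the authentication is unidirectional: the host proves its identity to the target, but does not require proof in return.
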
11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.563 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.130 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:12:50.130 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:12:50.697 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.697 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:50.697 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.697 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.697 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.697 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:50.697 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.697 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:50.697 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.956 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.523 00:12:51.523 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.523 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.523 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.782 { 00:12:51.782 "auth": { 00:12:51.782 "dhgroup": "ffdhe3072", 00:12:51.782 "digest": "sha256", 00:12:51.782 "state": "completed" 00:12:51.782 }, 00:12:51.782 "cntlid": 17, 00:12:51.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:12:51.782 "listen_address": { 00:12:51.782 "adrfam": "IPv4", 00:12:51.782 "traddr": "10.0.0.3", 00:12:51.782 "trsvcid": "4420", 00:12:51.782 "trtype": "TCP" 00:12:51.782 }, 00:12:51.782 "peer_address": { 00:12:51.782 "adrfam": "IPv4", 00:12:51.782 "traddr": "10.0.0.1", 00:12:51.782 "trsvcid": "43900", 00:12:51.782 "trtype": "TCP" 00:12:51.782 }, 00:12:51.782 "qid": 0, 00:12:51.782 "state": "enabled", 00:12:51.782 "thread": "nvmf_tgt_poll_group_000" 00:12:51.782 } 00:12:51.782 ]' 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.782 11:34:52 
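
With key3 done, the trace rolls over from ffdhe2048 to ffdhe3072. The driver logic visible at auth.sh lines 119-123 of the trace is a pair of nested loops, reconstructed here from the traced fragments (the digest stays sha256 throughout this excerpt):

    for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048, ffdhe3072, ffdhe4096 in this excerpt
        for keyid in "${!keys[@]}"; do     # key ids 0..3
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
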
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.782 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.041 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:12:52.041 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:12:53.030 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.030 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:53.030 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.030 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.030 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.030 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.030 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:53.030 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.288 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.547 00:12:53.547 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.547 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.547 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.805 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.805 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.805 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.805 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.063 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.063 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.063 { 00:12:54.063 "auth": { 00:12:54.063 "dhgroup": "ffdhe3072", 00:12:54.063 "digest": "sha256", 00:12:54.063 "state": "completed" 00:12:54.063 }, 00:12:54.063 "cntlid": 19, 00:12:54.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:12:54.063 "listen_address": { 00:12:54.063 "adrfam": "IPv4", 00:12:54.063 "traddr": "10.0.0.3", 00:12:54.063 "trsvcid": "4420", 00:12:54.063 "trtype": "TCP" 00:12:54.063 }, 00:12:54.063 "peer_address": { 00:12:54.063 "adrfam": "IPv4", 00:12:54.063 "traddr": "10.0.0.1", 00:12:54.063 "trsvcid": "43930", 00:12:54.063 "trtype": "TCP" 00:12:54.063 }, 00:12:54.063 "qid": 0, 00:12:54.063 "state": "enabled", 00:12:54.063 "thread": "nvmf_tgt_poll_group_000" 00:12:54.063 } 00:12:54.063 ]' 00:12:54.063 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.063 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:54.063 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.063 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
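
The host-side attach that opens each pass packs addressing, identity, and key selection into one RPC. Broken out for readability (flags exactly as traced, $rpc as in the sketches above; only the layout and comments are editorial):

    # -b      controller name in the host app (its bdevs surface as nvme0n1, ...)
    # -t/-f   transport and address family: TCP over IPv4
    # -a/-s   target address 10.0.0.3, service id (port) 4420
    # -q/-n   host NQN and target subsystem NQN
    # --dhchap-key / --dhchap-ctrlr-key   keyring names for the handshake
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 \
        -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
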
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:54.063 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.063 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.063 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.063 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.628 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:12:54.628 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:12:55.195 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.195 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:55.195 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.195 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.195 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.195 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.195 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:55.195 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:55.453 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:55.453 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.453 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:55.453 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:55.453 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:55.453 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.454 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.454 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.454 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.454 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.454 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.454 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.454 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.712 00:12:55.712 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.712 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.712 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.969 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.969 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.969 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.969 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.227 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.227 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.227 { 00:12:56.227 "auth": { 00:12:56.227 "dhgroup": "ffdhe3072", 00:12:56.227 "digest": "sha256", 00:12:56.227 "state": "completed" 00:12:56.227 }, 00:12:56.227 "cntlid": 21, 00:12:56.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:12:56.227 "listen_address": { 00:12:56.227 "adrfam": "IPv4", 00:12:56.227 "traddr": "10.0.0.3", 00:12:56.227 "trsvcid": "4420", 00:12:56.227 "trtype": "TCP" 00:12:56.227 }, 00:12:56.227 "peer_address": { 00:12:56.227 "adrfam": "IPv4", 00:12:56.227 "traddr": "10.0.0.1", 00:12:56.227 "trsvcid": "59564", 00:12:56.227 "trtype": "TCP" 00:12:56.228 }, 00:12:56.228 "qid": 0, 00:12:56.228 "state": "enabled", 00:12:56.228 "thread": "nvmf_tgt_poll_group_000" 00:12:56.228 } 00:12:56.228 ]' 00:12:56.228 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.228 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:56.228 11:34:56 
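
Between passes the target's view of the host is torn down and rebuilt, as traced just above: the host entry is removed from the subsystem, the host app's permitted algorithms are pinned for the next pass, and the host is re-added with the next key pair ($hostnqn again stands for the literal host NQN in the log). In outline:

    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"  # target forgets the host
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2                            # keys for this pass

Removing and re-adding the host is what lets each pass start from a clean slate: a stale entry carrying the previous pass's keys would otherwise still authenticate.
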
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.228 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:56.228 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.228 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.228 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.228 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.794 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:12:56.794 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:12:57.374 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.374 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:57.374 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.374 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.374 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.374 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.374 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:57.374 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:57.631 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:57.889 00:12:57.889 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.889 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.889 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.455 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.455 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.455 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.455 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.455 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.455 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.455 { 00:12:58.455 "auth": { 00:12:58.455 "dhgroup": "ffdhe3072", 00:12:58.455 "digest": "sha256", 00:12:58.455 "state": "completed" 00:12:58.455 }, 00:12:58.455 "cntlid": 23, 00:12:58.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:12:58.455 "listen_address": { 00:12:58.455 "adrfam": "IPv4", 00:12:58.455 "traddr": "10.0.0.3", 00:12:58.455 "trsvcid": "4420", 00:12:58.455 "trtype": "TCP" 00:12:58.455 }, 00:12:58.455 "peer_address": { 00:12:58.455 "adrfam": "IPv4", 00:12:58.455 "traddr": "10.0.0.1", 00:12:58.455 "trsvcid": "59598", 00:12:58.455 "trtype": "TCP" 00:12:58.455 }, 00:12:58.455 "qid": 0, 00:12:58.455 "state": "enabled", 00:12:58.455 "thread": "nvmf_tgt_poll_group_000" 00:12:58.455 } 00:12:58.455 ]' 00:12:58.455 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.455 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:58.455 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.455 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:58.455 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.455 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.455 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.455 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.713 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:12:58.713 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:12:59.279 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.279 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:12:59.279 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.279 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.279 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.279 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:59.279 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.279 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:59.279 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.537 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.103 00:13:00.103 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.103 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.103 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.362 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.362 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.362 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.362 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.362 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.362 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.362 { 00:13:00.362 "auth": { 00:13:00.362 "dhgroup": "ffdhe4096", 00:13:00.362 "digest": "sha256", 00:13:00.362 "state": "completed" 00:13:00.362 }, 00:13:00.362 "cntlid": 25, 00:13:00.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:00.362 "listen_address": { 00:13:00.362 "adrfam": "IPv4", 00:13:00.362 "traddr": "10.0.0.3", 00:13:00.362 "trsvcid": "4420", 00:13:00.362 "trtype": "TCP" 00:13:00.362 }, 00:13:00.362 "peer_address": { 00:13:00.362 "adrfam": "IPv4", 00:13:00.362 "traddr": "10.0.0.1", 00:13:00.362 "trsvcid": "59626", 00:13:00.362 "trtype": "TCP" 00:13:00.362 }, 00:13:00.362 "qid": 0, 00:13:00.362 "state": "enabled", 00:13:00.362 "thread": "nvmf_tgt_poll_group_000" 00:13:00.362 } 00:13:00.362 ]' 00:13:00.362 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:13:00.362 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:00.362 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.362 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:00.362 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.620 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.620 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.620 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.878 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:13:00.878 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:13:01.444 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.444 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:01.444 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.444 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.444 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.444 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.444 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:01.444 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.702 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.310 00:13:02.310 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.310 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.310 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.568 { 00:13:02.568 "auth": { 00:13:02.568 "dhgroup": "ffdhe4096", 00:13:02.568 "digest": "sha256", 00:13:02.568 "state": "completed" 00:13:02.568 }, 00:13:02.568 "cntlid": 27, 00:13:02.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:02.568 "listen_address": { 00:13:02.568 "adrfam": "IPv4", 00:13:02.568 "traddr": "10.0.0.3", 00:13:02.568 "trsvcid": "4420", 00:13:02.568 "trtype": "TCP" 00:13:02.568 }, 00:13:02.568 "peer_address": { 00:13:02.568 "adrfam": "IPv4", 00:13:02.568 "traddr": "10.0.0.1", 00:13:02.568 "trsvcid": "59660", 00:13:02.568 "trtype": "TCP" 00:13:02.568 }, 00:13:02.568 "qid": 0, 
00:13:02.568 "state": "enabled", 00:13:02.568 "thread": "nvmf_tgt_poll_group_000" 00:13:02.568 } 00:13:02.568 ]' 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.568 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.826 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:13:02.826 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:13:03.761 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.761 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:03.761 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.761 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.761 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.761 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.761 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:03.761 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
00:13:03.761 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:03.761 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:13:03.761 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2
00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.019 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:04.020 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:04.020 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:04.278
00:13:04.278 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:04.278 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:04.278 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:04.843 {
00:13:04.843 "auth": {
00:13:04.843 "dhgroup": "ffdhe4096",
00:13:04.843 "digest": "sha256",
00:13:04.843 "state": "completed"
00:13:04.843 },
00:13:04.843 "cntlid": 29,
00:13:04.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:13:04.843 "listen_address": {
00:13:04.843 "adrfam": "IPv4",
00:13:04.843 "traddr": "10.0.0.3",
00:13:04.843 "trsvcid": "4420",
00:13:04.843 "trtype": "TCP"
00:13:04.843 },
00:13:04.843 "peer_address": {
00:13:04.843 "adrfam": "IPv4",
00:13:04.843 "traddr": "10.0.0.1",
00:13:04.843 "trsvcid": "51360", 00:13:04.843 "trtype": "TCP" 00:13:04.843 }, 00:13:04.843 "qid": 0, 00:13:04.843 "state": "enabled", 00:13:04.843 "thread": "nvmf_tgt_poll_group_000" 00:13:04.843 } 00:13:04.843 ]' 00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.843 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.101 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:05.101 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:06.036 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.036 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:06.036 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.036 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.036 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.036 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.036 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:06.036 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3
00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:06.294 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:06.552
00:13:06.552 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:06.552 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:06.553 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:07.148 {
00:13:07.148 "auth": {
00:13:07.148 "dhgroup": "ffdhe4096",
00:13:07.148 "digest": "sha256",
00:13:07.148 "state": "completed"
00:13:07.148 },
00:13:07.148 "cntlid": 31,
00:13:07.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:13:07.148 "listen_address": {
00:13:07.148 "adrfam": "IPv4",
00:13:07.148 "traddr": "10.0.0.3",
00:13:07.148 "trsvcid": "4420",
00:13:07.148 "trtype": "TCP"
00:13:07.148 },
00:13:07.148 "peer_address": {
00:13:07.148 "adrfam": "IPv4",
00:13:07.148 "traddr": "10.0.0.1",
"10.0.0.1", 00:13:07.148 "trsvcid": "51398", 00:13:07.148 "trtype": "TCP" 00:13:07.148 }, 00:13:07.148 "qid": 0, 00:13:07.148 "state": "enabled", 00:13:07.148 "thread": "nvmf_tgt_poll_group_000" 00:13:07.148 } 00:13:07.148 ]' 00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.148 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.406 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:13:07.406 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:13:07.972 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.972 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:07.972 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.972 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.972 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.972 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:07.972 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.972 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:07.972 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:08.231 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:08.797
00:13:08.797 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:08.797 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:08.797 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:09.056 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:09.056 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:09.056 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.056 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:09.056 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.056 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:09.056 {
00:13:09.056 "auth": {
00:13:09.056 "dhgroup": "ffdhe6144",
00:13:09.056 "digest": "sha256",
00:13:09.056 "state": "completed"
00:13:09.056 },
00:13:09.056 "cntlid": 33,
00:13:09.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:13:09.056 "listen_address": {
00:13:09.056 "adrfam": "IPv4",
00:13:09.056 "traddr": "10.0.0.3",
00:13:09.056 "trsvcid": "4420",
00:13:09.056 "trtype": "TCP"
"trtype": "TCP" 00:13:09.056 }, 00:13:09.056 "peer_address": { 00:13:09.056 "adrfam": "IPv4", 00:13:09.056 "traddr": "10.0.0.1", 00:13:09.056 "trsvcid": "51424", 00:13:09.056 "trtype": "TCP" 00:13:09.056 }, 00:13:09.056 "qid": 0, 00:13:09.056 "state": "enabled", 00:13:09.056 "thread": "nvmf_tgt_poll_group_000" 00:13:09.056 } 00:13:09.056 ]' 00:13:09.056 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.056 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:09.056 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.314 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:09.314 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.314 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.314 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.314 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.572 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:13:09.572 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:13:10.505 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.505 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:10.505 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.505 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.505 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.505 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.505 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:10.505 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:10.505 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:11.070
00:13:11.070 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:11.070 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:11.070 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:11.328 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:11.328 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:11.328 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.328 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:11.328 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.328 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:11.328 {
00:13:11.328 "auth": {
00:13:11.328 "dhgroup": "ffdhe6144",
00:13:11.328 "digest": "sha256",
00:13:11.328 "state": "completed"
00:13:11.328 },
00:13:11.328 "cntlid": 35,
00:13:11.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
"nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:11.328 "listen_address": { 00:13:11.328 "adrfam": "IPv4", 00:13:11.328 "traddr": "10.0.0.3", 00:13:11.328 "trsvcid": "4420", 00:13:11.328 "trtype": "TCP" 00:13:11.328 }, 00:13:11.328 "peer_address": { 00:13:11.328 "adrfam": "IPv4", 00:13:11.328 "traddr": "10.0.0.1", 00:13:11.328 "trsvcid": "51450", 00:13:11.328 "trtype": "TCP" 00:13:11.328 }, 00:13:11.328 "qid": 0, 00:13:11.328 "state": "enabled", 00:13:11.328 "thread": "nvmf_tgt_poll_group_000" 00:13:11.328 } 00:13:11.328 ]' 00:13:11.328 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.595 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:11.595 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.595 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:11.595 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.595 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.595 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.595 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.867 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:13:11.867 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:13:12.801 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.801 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:12.801 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.801 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.801 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.801 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.801 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:12.801 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:12.395 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:12.962
00:13:12.962 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:12.962 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:12.962 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:13.529 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:13.529 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:13.529 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.529 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:13.884 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.884 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:13.884 {
00:13:13.884 "auth": {
00:13:13.884 "dhgroup": "ffdhe6144",
00:13:13.884 "digest": "sha256", 00:13:13.884 "state": "completed" 00:13:13.884 }, 00:13:13.884 "cntlid": 37, 00:13:13.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:13.884 "listen_address": { 00:13:13.884 "adrfam": "IPv4", 00:13:13.884 "traddr": "10.0.0.3", 00:13:13.884 "trsvcid": "4420", 00:13:13.884 "trtype": "TCP" 00:13:13.884 }, 00:13:13.884 "peer_address": { 00:13:13.884 "adrfam": "IPv4", 00:13:13.884 "traddr": "10.0.0.1", 00:13:13.884 "trsvcid": "51466", 00:13:13.884 "trtype": "TCP" 00:13:13.884 }, 00:13:13.884 "qid": 0, 00:13:13.884 "state": "enabled", 00:13:13.884 "thread": "nvmf_tgt_poll_group_000" 00:13:13.884 } 00:13:13.884 ]' 00:13:13.884 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.884 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:13.884 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.884 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:13.884 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.884 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.884 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.885 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.142 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:14.143 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:15.076 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.076 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:15.076 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.076 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.076 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.076 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.076 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:15.335 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:15.901
00:13:15.901 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:15.901 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:15.901 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:16.159 {
00:13:16.159 "auth": {
00:13:16.159 "dhgroup": "ffdhe6144",
"ffdhe6144", 00:13:16.159 "digest": "sha256", 00:13:16.159 "state": "completed" 00:13:16.159 }, 00:13:16.159 "cntlid": 39, 00:13:16.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:16.159 "listen_address": { 00:13:16.159 "adrfam": "IPv4", 00:13:16.159 "traddr": "10.0.0.3", 00:13:16.159 "trsvcid": "4420", 00:13:16.159 "trtype": "TCP" 00:13:16.159 }, 00:13:16.159 "peer_address": { 00:13:16.159 "adrfam": "IPv4", 00:13:16.159 "traddr": "10.0.0.1", 00:13:16.159 "trsvcid": "45910", 00:13:16.159 "trtype": "TCP" 00:13:16.159 }, 00:13:16.159 "qid": 0, 00:13:16.159 "state": "enabled", 00:13:16.159 "thread": "nvmf_tgt_poll_group_000" 00:13:16.159 } 00:13:16.159 ]' 00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.159 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.416 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:13:16.416 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:13:17.354 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.354 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:17.354 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.354 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.354 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.354 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:17.354 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.354 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:17.613 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:18.179
00:13:18.179 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:18.179 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:18.179 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:18.437 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:18.437 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:18.437 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.437 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:18.437 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.437 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:18.437 {
00:13:18.437 "auth": {
00:13:18.437 "dhgroup": "ffdhe8192",
00:13:18.437 "digest": "sha256",
00:13:18.437 "state": "completed"
00:13:18.437 },
00:13:18.437 "cntlid": 41,
00:13:18.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:13:18.437 "listen_address": {
00:13:18.437 "adrfam": "IPv4",
00:13:18.437 "traddr": "10.0.0.3",
00:13:18.437 "trsvcid": "4420",
00:13:18.437 "trtype": "TCP"
00:13:18.437 },
00:13:18.437 "peer_address": {
00:13:18.437 "adrfam": "IPv4",
00:13:18.437 "traddr": "10.0.0.1",
00:13:18.437 "trsvcid": "45944",
00:13:18.437 "trtype": "TCP"
00:13:18.437 },
00:13:18.437 "qid": 0,
00:13:18.437 "state": "enabled",
00:13:18.437 "thread": "nvmf_tgt_poll_group_000"
00:13:18.437 }
00:13:18.437 ]'
00:13:18.695 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:18.695 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:18.695 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:18.695 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:18.695 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:18.695 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:18.695 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:18.695 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:18.953 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=:
00:13:18.953 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=:
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:19.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
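Note the split between the two RPC sockets in every round: rpc_cmd talks to the target application, while hostrpc (expanded on target/auth.sh@31 throughout this log) points the same rpc.py at the host-side application's socket. The wrapper's effective shape, inferred from the trace rather than quoted from target/auth.sh:

    hostrpc() {
        # host-side SPDK app listens on /var/tmp/host.sock in this job
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }

That is why the bdev_nvme_* calls all carry -s /var/tmp/host.sock while the nvmf_subsystem_* calls do not: the host initiator and the NVMe-oF target are two separate SPDK processes exchanging DH-HMAC-CHAP over 10.0.0.3:4420.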
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.887 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:20.146 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.146 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:20.146 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:20.146 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:20.713
00:13:20.713 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:20.713 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:20.713 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:20.971 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:20.971 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:20.971 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.971 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:20.971 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.971 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:20.971 {
00:13:20.971 "auth": {
00:13:20.971 "dhgroup": "ffdhe8192",
00:13:20.971 "digest": "sha256",
00:13:20.971 "state": "completed"
00:13:20.971 },
00:13:20.971 "cntlid": 43,
00:13:20.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:13:20.971 "listen_address": {
00:13:20.971 "adrfam": "IPv4",
00:13:20.971 "traddr": "10.0.0.3",
00:13:20.971 "trsvcid": "4420",
00:13:20.971 "trtype": "TCP"
00:13:20.971 },
00:13:20.971 "peer_address": {
00:13:20.971 "adrfam": "IPv4",
00:13:20.971 "traddr": "10.0.0.1",
00:13:20.971 "trsvcid": "45974",
00:13:20.971 "trtype": "TCP"
00:13:20.971 },
00:13:20.971 "qid": 0,
00:13:20.971 "state": "enabled",
00:13:20.971 "thread": "nvmf_tgt_poll_group_000"
00:13:20.971 }
00:13:20.971 ]'
00:13:20.971 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:20.971 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:20.971 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:20.971 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:21.230 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:21.230 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:21.230 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:21.230 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:21.488 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==:
00:13:21.488 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==:
00:13:22.130 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:22.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:22.130 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:13:22.130 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.130 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:22.130 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.130 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.130 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:22.130 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.395 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.962 00:13:22.962 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.962 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.962 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.529 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.529 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.529 11:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.529 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.529 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.529 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.529 { 00:13:23.529 "auth": { 00:13:23.529 "dhgroup": "ffdhe8192", 00:13:23.529 "digest": "sha256", 00:13:23.529 "state": "completed" 00:13:23.529 }, 00:13:23.529 "cntlid": 45, 00:13:23.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:23.529 "listen_address": { 00:13:23.529 "adrfam": "IPv4", 00:13:23.529 "traddr": "10.0.0.3", 00:13:23.529 "trsvcid": "4420", 00:13:23.529 "trtype": "TCP" 00:13:23.529 }, 00:13:23.529 "peer_address": { 00:13:23.529 "adrfam": "IPv4", 00:13:23.529 "traddr": "10.0.0.1", 00:13:23.529 "trsvcid": "46006", 00:13:23.529 "trtype": "TCP" 00:13:23.529 }, 00:13:23.529 "qid": 0, 00:13:23.529 "state": "enabled", 00:13:23.529 "thread": "nvmf_tgt_poll_group_000" 00:13:23.529 } 00:13:23.529 ]' 00:13:23.529 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.529 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.529 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.529 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:23.529 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.529 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.529 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.529 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.788 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:23.788 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:24.723 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.723 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:24.723 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
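Each successful attach is then checked through the target's qpair listing, as in the JSON dump above. A sketch of those assertions for the sha256/ffdhe8192 case, assuming the same subsystem NQN; the RPC name and the jq filters are the ones this run invokes:

    # Ask the target for the subsystem's active queue pairs and assert that
    # the negotiated auth parameters match what bdev_nvme_set_options pinned.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
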
00:13:24.723 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.723 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.723 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.723 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:24.723 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.982 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:25.549 00:13:25.549 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.549 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.549 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.807 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.807 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.807 
11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.807 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.807 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.807 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.807 { 00:13:25.807 "auth": { 00:13:25.807 "dhgroup": "ffdhe8192", 00:13:25.807 "digest": "sha256", 00:13:25.807 "state": "completed" 00:13:25.807 }, 00:13:25.807 "cntlid": 47, 00:13:25.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:25.807 "listen_address": { 00:13:25.807 "adrfam": "IPv4", 00:13:25.807 "traddr": "10.0.0.3", 00:13:25.807 "trsvcid": "4420", 00:13:25.807 "trtype": "TCP" 00:13:25.807 }, 00:13:25.807 "peer_address": { 00:13:25.807 "adrfam": "IPv4", 00:13:25.807 "traddr": "10.0.0.1", 00:13:25.807 "trsvcid": "53874", 00:13:25.807 "trtype": "TCP" 00:13:25.807 }, 00:13:25.807 "qid": 0, 00:13:25.807 "state": "enabled", 00:13:25.807 "thread": "nvmf_tgt_poll_group_000" 00:13:25.807 } 00:13:25.807 ]' 00:13:25.807 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.066 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:26.066 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.066 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:26.066 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.066 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.066 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.066 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.324 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:13:26.324 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:13:27.259 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.259 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:27.259 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.259 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
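Besides the SPDK host stack, each pass also exercises the kernel initiator with the generated DH-HMAC-CHAP secrets, as in the nvme connect/disconnect lines just above. A sketch of that step; the DHHC-1 strings below are placeholders for the run's generated secrets (printed in full in the log), all flags are the ones nvme-cli was invoked with here, and --dhchap-ctrl-secret appears only when a controller key was configured (the key3 connect above omits it):

    # Kernel initiator: connect with explicit DH-HMAC-CHAP secrets, then tear
    # the association down before the next digest/dhgroup combination.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 \
        --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 \
        --dhchap-secret 'DHHC-1:03:<host secret>' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Immediately below, the outer loops advance to the next combination (sha384 with the null DH group) and the same configure/connect/verify/disconnect cycle repeats.
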
00:13:27.259 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.259 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:27.259 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:27.259 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.259 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:27.259 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.517 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.774 00:13:27.774 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.774 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.774 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.033 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.033 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.033 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.033 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.291 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.291 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.291 { 00:13:28.291 "auth": { 00:13:28.291 "dhgroup": "null", 00:13:28.291 "digest": "sha384", 00:13:28.291 "state": "completed" 00:13:28.291 }, 00:13:28.291 "cntlid": 49, 00:13:28.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:28.291 "listen_address": { 00:13:28.291 "adrfam": "IPv4", 00:13:28.291 "traddr": "10.0.0.3", 00:13:28.291 "trsvcid": "4420", 00:13:28.291 "trtype": "TCP" 00:13:28.291 }, 00:13:28.291 "peer_address": { 00:13:28.291 "adrfam": "IPv4", 00:13:28.291 "traddr": "10.0.0.1", 00:13:28.291 "trsvcid": "53908", 00:13:28.291 "trtype": "TCP" 00:13:28.291 }, 00:13:28.291 "qid": 0, 00:13:28.291 "state": "enabled", 00:13:28.291 "thread": "nvmf_tgt_poll_group_000" 00:13:28.291 } 00:13:28.291 ]' 00:13:28.292 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.292 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:28.292 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.292 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:28.292 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.292 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.292 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.292 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.550 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:13:28.550 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:13:29.485 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.485 11:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:29.485 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.485 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.485 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.485 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.485 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:29.485 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.744 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.002 00:13:30.002 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.002 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.002 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.261 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.261 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.261 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.261 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.261 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.261 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.261 { 00:13:30.261 "auth": { 00:13:30.261 "dhgroup": "null", 00:13:30.261 "digest": "sha384", 00:13:30.261 "state": "completed" 00:13:30.261 }, 00:13:30.261 "cntlid": 51, 00:13:30.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:30.261 "listen_address": { 00:13:30.261 "adrfam": "IPv4", 00:13:30.261 "traddr": "10.0.0.3", 00:13:30.261 "trsvcid": "4420", 00:13:30.261 "trtype": "TCP" 00:13:30.261 }, 00:13:30.261 "peer_address": { 00:13:30.261 "adrfam": "IPv4", 00:13:30.261 "traddr": "10.0.0.1", 00:13:30.261 "trsvcid": "53924", 00:13:30.261 "trtype": "TCP" 00:13:30.261 }, 00:13:30.261 "qid": 0, 00:13:30.261 "state": "enabled", 00:13:30.261 "thread": "nvmf_tgt_poll_group_000" 00:13:30.261 } 00:13:30.261 ]' 00:13:30.261 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.261 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:30.261 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.261 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:30.261 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.520 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.520 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.520 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.778 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:13:30.778 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:13:31.345 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.345 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.345 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:31.345 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.345 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.345 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.345 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.345 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:31.345 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.605 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.173 00:13:32.173 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.173 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.173 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.431 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.431 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.431 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.431 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.431 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.431 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.431 { 00:13:32.431 "auth": { 00:13:32.431 "dhgroup": "null", 00:13:32.431 "digest": "sha384", 00:13:32.431 "state": "completed" 00:13:32.431 }, 00:13:32.431 "cntlid": 53, 00:13:32.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:32.431 "listen_address": { 00:13:32.431 "adrfam": "IPv4", 00:13:32.431 "traddr": "10.0.0.3", 00:13:32.431 "trsvcid": "4420", 00:13:32.431 "trtype": "TCP" 00:13:32.431 }, 00:13:32.431 "peer_address": { 00:13:32.431 "adrfam": "IPv4", 00:13:32.431 "traddr": "10.0.0.1", 00:13:32.431 "trsvcid": "53948", 00:13:32.431 "trtype": "TCP" 00:13:32.431 }, 00:13:32.431 "qid": 0, 00:13:32.431 "state": "enabled", 00:13:32.431 "thread": "nvmf_tgt_poll_group_000" 00:13:32.431 } 00:13:32.431 ]' 00:13:32.431 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.431 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:32.431 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.431 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:32.431 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.431 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.431 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.431 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.690 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:32.690 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:33.257 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.257 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:33.257 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.257 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.516 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.516 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.516 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:33.516 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:33.775 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.034 00:13:34.034 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.034 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.034 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.293 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.293 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.293 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.293 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.293 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.293 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.293 { 00:13:34.293 "auth": { 00:13:34.293 "dhgroup": "null", 00:13:34.293 "digest": "sha384", 00:13:34.293 "state": "completed" 00:13:34.293 }, 00:13:34.293 "cntlid": 55, 00:13:34.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:34.293 "listen_address": { 00:13:34.293 "adrfam": "IPv4", 00:13:34.293 "traddr": "10.0.0.3", 00:13:34.293 "trsvcid": "4420", 00:13:34.293 "trtype": "TCP" 00:13:34.293 }, 00:13:34.293 "peer_address": { 00:13:34.293 "adrfam": "IPv4", 00:13:34.293 "traddr": "10.0.0.1", 00:13:34.293 "trsvcid": "55494", 00:13:34.293 "trtype": "TCP" 00:13:34.293 }, 00:13:34.293 "qid": 0, 00:13:34.293 "state": "enabled", 00:13:34.293 "thread": "nvmf_tgt_poll_group_000" 00:13:34.293 } 00:13:34.293 ]' 00:13:34.293 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.553 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:34.553 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.553 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:34.553 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.553 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.553 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.553 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.812 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:13:34.812 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.807 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.066 00:13:36.324 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.324 
11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.324 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.583 { 00:13:36.583 "auth": { 00:13:36.583 "dhgroup": "ffdhe2048", 00:13:36.583 "digest": "sha384", 00:13:36.583 "state": "completed" 00:13:36.583 }, 00:13:36.583 "cntlid": 57, 00:13:36.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:36.583 "listen_address": { 00:13:36.583 "adrfam": "IPv4", 00:13:36.583 "traddr": "10.0.0.3", 00:13:36.583 "trsvcid": "4420", 00:13:36.583 "trtype": "TCP" 00:13:36.583 }, 00:13:36.583 "peer_address": { 00:13:36.583 "adrfam": "IPv4", 00:13:36.583 "traddr": "10.0.0.1", 00:13:36.583 "trsvcid": "55502", 00:13:36.583 "trtype": "TCP" 00:13:36.583 }, 00:13:36.583 "qid": 0, 00:13:36.583 "state": "enabled", 00:13:36.583 "thread": "nvmf_tgt_poll_group_000" 00:13:36.583 } 00:13:36.583 ]' 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.583 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.149 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:13:37.149 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: 
--dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:13:37.716 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.716 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:37.716 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.716 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.716 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.716 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.716 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:37.716 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.976 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.234 00:13:38.234 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.234 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.234 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.493 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.493 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.493 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.493 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.493 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.493 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.493 { 00:13:38.493 "auth": { 00:13:38.493 "dhgroup": "ffdhe2048", 00:13:38.493 "digest": "sha384", 00:13:38.493 "state": "completed" 00:13:38.493 }, 00:13:38.493 "cntlid": 59, 00:13:38.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:38.493 "listen_address": { 00:13:38.493 "adrfam": "IPv4", 00:13:38.493 "traddr": "10.0.0.3", 00:13:38.493 "trsvcid": "4420", 00:13:38.493 "trtype": "TCP" 00:13:38.493 }, 00:13:38.493 "peer_address": { 00:13:38.493 "adrfam": "IPv4", 00:13:38.493 "traddr": "10.0.0.1", 00:13:38.493 "trsvcid": "55534", 00:13:38.493 "trtype": "TCP" 00:13:38.493 }, 00:13:38.493 "qid": 0, 00:13:38.493 "state": "enabled", 00:13:38.493 "thread": "nvmf_tgt_poll_group_000" 00:13:38.493 } 00:13:38.493 ]' 00:13:38.493 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.751 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:38.751 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.751 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:38.751 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.751 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.751 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.751 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.010 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:13:39.010 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.944 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.201 00:13:40.459 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.459 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.459 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.717 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.717 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.717 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.717 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.717 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.717 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.717 { 00:13:40.717 "auth": { 00:13:40.717 "dhgroup": "ffdhe2048", 00:13:40.717 "digest": "sha384", 00:13:40.717 "state": "completed" 00:13:40.717 }, 00:13:40.717 "cntlid": 61, 00:13:40.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:40.717 "listen_address": { 00:13:40.717 "adrfam": "IPv4", 00:13:40.717 "traddr": "10.0.0.3", 00:13:40.717 "trsvcid": "4420", 00:13:40.717 "trtype": "TCP" 00:13:40.717 }, 00:13:40.717 "peer_address": { 00:13:40.717 "adrfam": "IPv4", 00:13:40.717 "traddr": "10.0.0.1", 00:13:40.717 "trsvcid": "55572", 00:13:40.717 "trtype": "TCP" 00:13:40.717 }, 00:13:40.717 "qid": 0, 00:13:40.717 "state": "enabled", 00:13:40.718 "thread": "nvmf_tgt_poll_group_000" 00:13:40.718 } 00:13:40.718 ]' 00:13:40.718 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.718 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:40.718 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.718 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:40.718 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.718 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.718 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.718 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.977 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:40.977 11:35:41 
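
Note: the verification pattern repeated throughout this run condenses to the short sequence below. A minimal sketch, assuming a target already serving nqn.2024-03.io.spdk:cnode0 on its default RPC socket; jq is the same tool the script itself invokes.

    # Query the subsystem's active qpair and assert the negotiated
    # DH-HMAC-CHAP parameters (digest, DH group, final auth state).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
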
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:41.911 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.911 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:41.911 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.911 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.911 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.911 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.911 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:41.911 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.169 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.427 00:13:42.427 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.427 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.427 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.685 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.685 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.685 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.685 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.685 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.685 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.685 { 00:13:42.685 "auth": { 00:13:42.685 "dhgroup": "ffdhe2048", 00:13:42.685 "digest": "sha384", 00:13:42.685 "state": "completed" 00:13:42.685 }, 00:13:42.685 "cntlid": 63, 00:13:42.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:42.685 "listen_address": { 00:13:42.685 "adrfam": "IPv4", 00:13:42.685 "traddr": "10.0.0.3", 00:13:42.685 "trsvcid": "4420", 00:13:42.685 "trtype": "TCP" 00:13:42.685 }, 00:13:42.685 "peer_address": { 00:13:42.685 "adrfam": "IPv4", 00:13:42.685 "traddr": "10.0.0.1", 00:13:42.685 "trsvcid": "55586", 00:13:42.685 "trtype": "TCP" 00:13:42.685 }, 00:13:42.685 "qid": 0, 00:13:42.685 "state": "enabled", 00:13:42.685 "thread": "nvmf_tgt_poll_group_000" 00:13:42.685 } 00:13:42.685 ]' 00:13:42.685 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.685 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:42.685 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.943 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:42.943 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.943 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.943 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.943 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.202 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:13:43.202 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
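
Note: the nvme-cli reconnect that follows passes the DHHC-1-formatted secrets directly on the command line. For key3 no controller secret was provisioned, so --dhchap-ctrl-secret is omitted and authentication runs unidirectionally. A sketch with placeholder variables; $hostnqn, $hostid, $host_secret and $ctrl_secret stand in for the literal values visible in the surrounding log lines.

    # Kernel-initiator connect authenticated with DH-HMAC-CHAP.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$host_secret" \
        --dhchap-ctrl-secret "$ctrl_secret"   # dropped for unidirectional cases
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
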
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:13:43.768 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.768 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:43.768 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.768 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.768 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.768 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:43.768 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.768 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:43.768 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:44.335 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.593 00:13:44.593 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.593 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.593 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.850 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.850 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.850 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.850 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.850 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.850 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.850 { 00:13:44.850 "auth": { 00:13:44.850 "dhgroup": "ffdhe3072", 00:13:44.850 "digest": "sha384", 00:13:44.850 "state": "completed" 00:13:44.850 }, 00:13:44.850 "cntlid": 65, 00:13:44.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:44.850 "listen_address": { 00:13:44.850 "adrfam": "IPv4", 00:13:44.850 "traddr": "10.0.0.3", 00:13:44.850 "trsvcid": "4420", 00:13:44.850 "trtype": "TCP" 00:13:44.850 }, 00:13:44.850 "peer_address": { 00:13:44.850 "adrfam": "IPv4", 00:13:44.850 "traddr": "10.0.0.1", 00:13:44.850 "trsvcid": "48114", 00:13:44.850 "trtype": "TCP" 00:13:44.850 }, 00:13:44.850 "qid": 0, 00:13:44.850 "state": "enabled", 00:13:44.850 "thread": "nvmf_tgt_poll_group_000" 00:13:44.850 } 00:13:44.850 ]' 00:13:44.850 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.850 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:44.850 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.850 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:44.850 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.123 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.123 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.123 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.407 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:13:45.407 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:13:45.974 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.974 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:45.974 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.974 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.974 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.974 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.974 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:45.974 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:46.233 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:46.233 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.233 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:46.233 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:46.233 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:46.233 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.233 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.233 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.233 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.233 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.233 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.233 11:35:46 
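
Note: on the SPDK host side the same keys are exercised through the bdev layer rather than the kernel initiator, and bdev_connect is a thin wrapper over the RPC shown next. A minimal sketch, assuming key1/ckey1 were registered in the host keyring earlier in auth.sh and $hostnqn holds the UUID-based host NQN.

    # Attach an NVMe-oF controller via the host app's RPC socket,
    # authenticating with the named host/controller keys.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
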
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.233 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.799 00:13:46.799 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.799 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.800 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.058 { 00:13:47.058 "auth": { 00:13:47.058 "dhgroup": "ffdhe3072", 00:13:47.058 "digest": "sha384", 00:13:47.058 "state": "completed" 00:13:47.058 }, 00:13:47.058 "cntlid": 67, 00:13:47.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:47.058 "listen_address": { 00:13:47.058 "adrfam": "IPv4", 00:13:47.058 "traddr": "10.0.0.3", 00:13:47.058 "trsvcid": "4420", 00:13:47.058 "trtype": "TCP" 00:13:47.058 }, 00:13:47.058 "peer_address": { 00:13:47.058 "adrfam": "IPv4", 00:13:47.058 "traddr": "10.0.0.1", 00:13:47.058 "trsvcid": "48136", 00:13:47.058 "trtype": "TCP" 00:13:47.058 }, 00:13:47.058 "qid": 0, 00:13:47.058 "state": "enabled", 00:13:47.058 "thread": "nvmf_tgt_poll_group_000" 00:13:47.058 } 00:13:47.058 ]' 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.058 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.316 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:13:47.317 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:13:47.882 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.141 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:48.141 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.141 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.141 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.141 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.141 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:48.141 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:48.399 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:48.399 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.399 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:48.399 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:48.399 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:48.399 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.399 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.399 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.399 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.399 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.399 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
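
Note: every case ends with a full teardown before the next key is provisioned, so no DH-HMAC-CHAP state carries over between iterations. The cycle at this point in the run, sketched with a placeholder $hostnqn:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # tear down the previous case
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
    # re-pin the host to one digest/DH-group pair and provision the next key
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
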
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.399 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.400 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.658 00:13:48.658 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.658 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.658 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.916 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.916 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.916 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.916 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.916 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.916 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.916 { 00:13:48.916 "auth": { 00:13:48.916 "dhgroup": "ffdhe3072", 00:13:48.916 "digest": "sha384", 00:13:48.916 "state": "completed" 00:13:48.916 }, 00:13:48.916 "cntlid": 69, 00:13:48.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:48.916 "listen_address": { 00:13:48.916 "adrfam": "IPv4", 00:13:48.916 "traddr": "10.0.0.3", 00:13:48.916 "trsvcid": "4420", 00:13:48.916 "trtype": "TCP" 00:13:48.916 }, 00:13:48.916 "peer_address": { 00:13:48.916 "adrfam": "IPv4", 00:13:48.916 "traddr": "10.0.0.1", 00:13:48.916 "trsvcid": "48154", 00:13:48.916 "trtype": "TCP" 00:13:48.916 }, 00:13:48.916 "qid": 0, 00:13:48.916 "state": "enabled", 00:13:48.916 "thread": "nvmf_tgt_poll_group_000" 00:13:48.916 } 00:13:48.916 ]' 00:13:48.916 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.175 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:49.175 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.175 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:49.175 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.175 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.175 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:49.175 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.432 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:49.432 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:49.999 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.999 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:49.999 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.999 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.999 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.999 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.999 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:49.999 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
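
Note: the [[ nvme0 == \n\v\m\e\0 ]] comparisons above are xtrace output of a plain name check; before probing qpairs the script confirms the attach actually produced a controller. An equivalent standalone form, as a sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # confirm the host-side controller exists, then detach it when done
    name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
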
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:50.257 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:50.823 00:13:50.823 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.824 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.824 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.082 { 00:13:51.082 "auth": { 00:13:51.082 "dhgroup": "ffdhe3072", 00:13:51.082 "digest": "sha384", 00:13:51.082 "state": "completed" 00:13:51.082 }, 00:13:51.082 "cntlid": 71, 00:13:51.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:51.082 "listen_address": { 00:13:51.082 "adrfam": "IPv4", 00:13:51.082 "traddr": "10.0.0.3", 00:13:51.082 "trsvcid": "4420", 00:13:51.082 "trtype": "TCP" 00:13:51.082 }, 00:13:51.082 "peer_address": { 00:13:51.082 "adrfam": "IPv4", 00:13:51.082 "traddr": "10.0.0.1", 00:13:51.082 "trsvcid": "48184", 00:13:51.082 "trtype": "TCP" 00:13:51.082 }, 00:13:51.082 "qid": 0, 00:13:51.082 "state": "enabled", 00:13:51.082 "thread": "nvmf_tgt_poll_group_000" 00:13:51.082 } 00:13:51.082 ]' 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.082 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.341 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:13:51.341 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:13:52.276 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.276 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:52.276 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.276 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.276 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.276 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:52.276 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.276 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:52.276 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:52.535 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:52.535 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.535 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:52.535 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:52.535 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:52.535 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.535 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.535 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.535 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.535 11:35:53 
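
Note: the loop echoes above (target/auth.sh@119-121) show the shape of the matrix being driven here: for each DH group the host options are re-restricted, then every key index is pushed through connect_authenticate. A sketch of that structure, assuming the keys array and the connect_authenticate helper defined earlier in auth.sh:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    digest=sha384
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # groups covered in this segment
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
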
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.535 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.535 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.535 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.794 00:13:52.794 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.794 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.794 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.361 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.361 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.361 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.361 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.362 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.362 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.362 { 00:13:53.362 "auth": { 00:13:53.362 "dhgroup": "ffdhe4096", 00:13:53.362 "digest": "sha384", 00:13:53.362 "state": "completed" 00:13:53.362 }, 00:13:53.362 "cntlid": 73, 00:13:53.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:53.362 "listen_address": { 00:13:53.362 "adrfam": "IPv4", 00:13:53.362 "traddr": "10.0.0.3", 00:13:53.362 "trsvcid": "4420", 00:13:53.362 "trtype": "TCP" 00:13:53.362 }, 00:13:53.362 "peer_address": { 00:13:53.362 "adrfam": "IPv4", 00:13:53.362 "traddr": "10.0.0.1", 00:13:53.362 "trsvcid": "48220", 00:13:53.362 "trtype": "TCP" 00:13:53.362 }, 00:13:53.362 "qid": 0, 00:13:53.362 "state": "enabled", 00:13:53.362 "thread": "nvmf_tgt_poll_group_000" 00:13:53.362 } 00:13:53.362 ]' 00:13:53.362 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.362 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:53.362 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.362 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:53.362 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.362 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.362 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.362 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.620 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:13:53.620 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:13:54.553 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.553 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:54.553 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.553 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.553 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.553 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.553 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:54.553 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:54.811 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:54.811 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.811 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:54.811 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:54.811 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:54.811 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.811 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.811 11:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.811 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.811 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.811 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.811 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.811 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.068 00:13:55.068 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.068 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.068 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.326 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.326 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.326 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.326 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.326 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.326 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.326 { 00:13:55.326 "auth": { 00:13:55.326 "dhgroup": "ffdhe4096", 00:13:55.326 "digest": "sha384", 00:13:55.326 "state": "completed" 00:13:55.326 }, 00:13:55.326 "cntlid": 75, 00:13:55.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:55.326 "listen_address": { 00:13:55.326 "adrfam": "IPv4", 00:13:55.326 "traddr": "10.0.0.3", 00:13:55.326 "trsvcid": "4420", 00:13:55.326 "trtype": "TCP" 00:13:55.326 }, 00:13:55.326 "peer_address": { 00:13:55.326 "adrfam": "IPv4", 00:13:55.326 "traddr": "10.0.0.1", 00:13:55.326 "trsvcid": "49486", 00:13:55.326 "trtype": "TCP" 00:13:55.326 }, 00:13:55.326 "qid": 0, 00:13:55.326 "state": "enabled", 00:13:55.326 "thread": "nvmf_tgt_poll_group_000" 00:13:55.326 } 00:13:55.326 ]' 00:13:55.326 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.585 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:55.585 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.585 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:13:55.585 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.585 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.585 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.585 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.843 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:13:55.843 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:13:56.777 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.777 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:56.777 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.777 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.777 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.777 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.777 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:56.777 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:56.777 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:57.037 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.037 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:57.037 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:57.037 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:57.037 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.037 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.037 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.037 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.037 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.037 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.037 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.037 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.295 00:13:57.295 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.295 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.295 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.554 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.554 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.554 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.554 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.554 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.554 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.554 { 00:13:57.554 "auth": { 00:13:57.554 "dhgroup": "ffdhe4096", 00:13:57.554 "digest": "sha384", 00:13:57.554 "state": "completed" 00:13:57.554 }, 00:13:57.554 "cntlid": 77, 00:13:57.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:57.554 "listen_address": { 00:13:57.554 "adrfam": "IPv4", 00:13:57.554 "traddr": "10.0.0.3", 00:13:57.554 "trsvcid": "4420", 00:13:57.554 "trtype": "TCP" 00:13:57.554 }, 00:13:57.554 "peer_address": { 00:13:57.554 "adrfam": "IPv4", 00:13:57.554 "traddr": "10.0.0.1", 00:13:57.554 "trsvcid": "49504", 00:13:57.554 "trtype": "TCP" 00:13:57.554 }, 00:13:57.554 "qid": 0, 00:13:57.554 "state": "enabled", 00:13:57.554 "thread": "nvmf_tgt_poll_group_000" 00:13:57.554 } 00:13:57.554 ]' 00:13:57.554 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.554 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.554 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:13:57.812 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:57.812 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.812 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.812 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.812 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.071 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:58.071 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:13:58.638 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.897 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:13:58.897 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.897 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.897 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.897 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.897 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:58.897 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:59.156 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:59.156 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.156 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:59.156 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:59.156 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:59.156 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.156 11:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:13:59.156 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.156 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.156 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.156 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:59.156 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:59.156 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:59.414 00:13:59.415 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.415 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.415 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.673 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.673 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.673 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.673 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.673 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.673 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.673 { 00:13:59.673 "auth": { 00:13:59.673 "dhgroup": "ffdhe4096", 00:13:59.673 "digest": "sha384", 00:13:59.673 "state": "completed" 00:13:59.673 }, 00:13:59.673 "cntlid": 79, 00:13:59.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:13:59.673 "listen_address": { 00:13:59.673 "adrfam": "IPv4", 00:13:59.673 "traddr": "10.0.0.3", 00:13:59.673 "trsvcid": "4420", 00:13:59.673 "trtype": "TCP" 00:13:59.673 }, 00:13:59.673 "peer_address": { 00:13:59.673 "adrfam": "IPv4", 00:13:59.673 "traddr": "10.0.0.1", 00:13:59.673 "trsvcid": "49534", 00:13:59.673 "trtype": "TCP" 00:13:59.673 }, 00:13:59.673 "qid": 0, 00:13:59.673 "state": "enabled", 00:13:59.673 "thread": "nvmf_tgt_poll_group_000" 00:13:59.673 } 00:13:59.673 ]' 00:13:59.673 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.938 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.938 11:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.938 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:59.938 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.938 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.938 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.938 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.228 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:14:00.228 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:14:00.799 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.799 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:00.799 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.799 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.799 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.799 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:00.799 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.799 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:00.799 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.625 00:14:01.625 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.625 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.625 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:02.192 { 00:14:02.192 "auth": { 00:14:02.192 "dhgroup": "ffdhe6144", 00:14:02.192 "digest": "sha384", 00:14:02.192 "state": "completed" 00:14:02.192 }, 00:14:02.192 "cntlid": 81, 00:14:02.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:02.192 "listen_address": { 00:14:02.192 "adrfam": "IPv4", 00:14:02.192 "traddr": "10.0.0.3", 00:14:02.192 "trsvcid": "4420", 00:14:02.192 "trtype": "TCP" 00:14:02.192 }, 00:14:02.192 "peer_address": { 00:14:02.192 "adrfam": "IPv4", 00:14:02.192 "traddr": "10.0.0.1", 00:14:02.192 "trsvcid": "49550", 00:14:02.192 "trtype": "TCP" 00:14:02.192 }, 00:14:02.192 "qid": 0, 00:14:02.192 "state": "enabled", 00:14:02.192 "thread": "nvmf_tgt_poll_group_000" 00:14:02.192 } 00:14:02.192 ]' 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
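For reference, the host-side flow this trace repeats for every digest/dhgroup/key combination condenses to a short RPC sequence. A minimal sketch of one iteration, using only RPCs that appear in the trace; it assumes the SPDK target is already listening on 10.0.0.3:4420, the host-side bdev_nvme RPC server is at /var/tmp/host.sock, and DH-HMAC-CHAP keys named key0/ckey0 were registered with the target earlier in the run:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8

  # Restrict the host to one digest/dhgroup pair (mirrors bdev_nvme_set_options above).
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Authorize the host NQN on the subsystem with a bidirectional key pair.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a controller; DH-HMAC-CHAP runs as part of the CONNECT exchange.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify the negotiated auth parameters on the target's qpair,
  # which the trace checks with three separate jq calls.
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'
  # expected output: sha384 ffdhe6144 completed

  # Tear down before the next combination.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
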
00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.192 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.451 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:14:02.451 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:14:03.387 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.387 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:03.387 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.387 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.387 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.387 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.387 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:03.387 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.646 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.904 00:14:04.162 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.163 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.163 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.462 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.462 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.462 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.462 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.462 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.462 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.462 { 00:14:04.462 "auth": { 00:14:04.462 "dhgroup": "ffdhe6144", 00:14:04.462 "digest": "sha384", 00:14:04.462 "state": "completed" 00:14:04.462 }, 00:14:04.462 "cntlid": 83, 00:14:04.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:04.463 "listen_address": { 00:14:04.463 "adrfam": "IPv4", 00:14:04.463 "traddr": "10.0.0.3", 00:14:04.463 "trsvcid": "4420", 00:14:04.463 "trtype": "TCP" 00:14:04.463 }, 00:14:04.463 "peer_address": { 00:14:04.463 "adrfam": "IPv4", 00:14:04.463 "traddr": "10.0.0.1", 00:14:04.463 "trsvcid": "54076", 00:14:04.463 "trtype": "TCP" 00:14:04.463 }, 00:14:04.463 "qid": 0, 00:14:04.463 "state": 
"enabled", 00:14:04.463 "thread": "nvmf_tgt_poll_group_000" 00:14:04.463 } 00:14:04.463 ]' 00:14:04.463 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.463 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:04.463 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.463 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:04.463 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.463 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.463 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.463 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.720 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:14:04.720 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:14:05.656 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.656 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:05.656 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.656 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.656 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.656 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.656 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:05.656 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.914 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.481 00:14:06.481 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.481 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.481 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.741 { 00:14:06.741 "auth": { 00:14:06.741 "dhgroup": "ffdhe6144", 00:14:06.741 "digest": "sha384", 00:14:06.741 "state": "completed" 00:14:06.741 }, 00:14:06.741 "cntlid": 85, 00:14:06.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:06.741 "listen_address": { 00:14:06.741 "adrfam": "IPv4", 00:14:06.741 "traddr": "10.0.0.3", 00:14:06.741 "trsvcid": "4420", 00:14:06.741 "trtype": "TCP" 00:14:06.741 }, 00:14:06.741 "peer_address": { 00:14:06.741 "adrfam": "IPv4", 00:14:06.741 "traddr": "10.0.0.1", 00:14:06.741 
"trsvcid": "54114", 00:14:06.741 "trtype": "TCP" 00:14:06.741 }, 00:14:06.741 "qid": 0, 00:14:06.741 "state": "enabled", 00:14:06.741 "thread": "nvmf_tgt_poll_group_000" 00:14:06.741 } 00:14:06.741 ]' 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.741 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.999 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:14:06.999 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:14:07.935 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.935 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:07.935 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.935 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.935 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.935 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.936 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:07.936 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.211 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.470 00:14:08.728 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.728 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.728 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.001 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.001 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.001 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.001 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.001 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.001 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.001 { 00:14:09.001 "auth": { 00:14:09.001 "dhgroup": "ffdhe6144", 00:14:09.001 "digest": "sha384", 00:14:09.001 "state": "completed" 00:14:09.001 }, 00:14:09.001 "cntlid": 87, 00:14:09.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:09.001 "listen_address": { 00:14:09.001 "adrfam": "IPv4", 00:14:09.001 "traddr": "10.0.0.3", 00:14:09.001 "trsvcid": "4420", 00:14:09.001 "trtype": "TCP" 00:14:09.001 }, 00:14:09.001 "peer_address": { 00:14:09.001 "adrfam": "IPv4", 00:14:09.001 "traddr": "10.0.0.1", 
00:14:09.001 "trsvcid": "54146", 00:14:09.001 "trtype": "TCP" 00:14:09.001 }, 00:14:09.001 "qid": 0, 00:14:09.001 "state": "enabled", 00:14:09.001 "thread": "nvmf_tgt_poll_group_000" 00:14:09.001 } 00:14:09.001 ]' 00:14:09.001 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.001 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:09.001 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.001 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:09.001 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.272 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.272 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.272 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.531 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:14:09.531 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:14:10.098 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.098 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:10.098 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.098 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.098 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.098 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.098 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.098 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:10.098 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.357 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.292 00:14:11.292 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.292 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.292 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.293 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.293 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.293 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.293 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.293 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.293 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.293 { 00:14:11.293 "auth": { 00:14:11.293 "dhgroup": "ffdhe8192", 00:14:11.293 "digest": "sha384", 00:14:11.293 "state": "completed" 00:14:11.293 }, 00:14:11.293 "cntlid": 89, 00:14:11.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:11.293 "listen_address": { 00:14:11.293 "adrfam": "IPv4", 00:14:11.293 "traddr": "10.0.0.3", 00:14:11.293 "trsvcid": "4420", 00:14:11.293 "trtype": "TCP" 
00:14:11.293 }, 00:14:11.293 "peer_address": { 00:14:11.293 "adrfam": "IPv4", 00:14:11.293 "traddr": "10.0.0.1", 00:14:11.293 "trsvcid": "54174", 00:14:11.293 "trtype": "TCP" 00:14:11.293 }, 00:14:11.293 "qid": 0, 00:14:11.293 "state": "enabled", 00:14:11.293 "thread": "nvmf_tgt_poll_group_000" 00:14:11.293 } 00:14:11.293 ]' 00:14:11.293 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.552 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:11.552 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.552 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:11.552 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.552 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.552 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.552 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.811 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:14:11.811 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:14:12.378 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.378 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:12.378 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.378 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.378 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.378 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.378 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:12.378 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:12.945 11:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:12.945 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.945 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:12.945 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:12.945 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:12.945 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.945 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.945 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.945 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.945 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.945 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.945 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.945 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.512 00:14:13.512 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.512 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.512 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.771 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.771 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.771 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.771 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.771 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.771 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.771 { 00:14:13.771 "auth": { 00:14:13.771 "dhgroup": "ffdhe8192", 00:14:13.771 "digest": "sha384", 00:14:13.771 "state": "completed" 00:14:13.771 }, 00:14:13.771 "cntlid": 91, 00:14:13.771 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:13.771 "listen_address": { 00:14:13.771 "adrfam": "IPv4", 00:14:13.771 "traddr": "10.0.0.3", 00:14:13.771 "trsvcid": "4420", 00:14:13.771 "trtype": "TCP" 00:14:13.771 }, 00:14:13.771 "peer_address": { 00:14:13.771 "adrfam": "IPv4", 00:14:13.771 "traddr": "10.0.0.1", 00:14:13.771 "trsvcid": "54194", 00:14:13.771 "trtype": "TCP" 00:14:13.771 }, 00:14:13.771 "qid": 0, 00:14:13.771 "state": "enabled", 00:14:13.771 "thread": "nvmf_tgt_poll_group_000" 00:14:13.771 } 00:14:13.771 ]' 00:14:13.771 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.771 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:13.771 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.771 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:13.771 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.030 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.030 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.030 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.303 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:14:14.303 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:14:14.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:14.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:14.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.464 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.030 00:14:16.030 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.030 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.030 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.289 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.289 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.289 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.289 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.289 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.289 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.289 { 00:14:16.289 "auth": { 00:14:16.289 "dhgroup": "ffdhe8192", 
00:14:16.289 "digest": "sha384", 00:14:16.289 "state": "completed" 00:14:16.289 }, 00:14:16.289 "cntlid": 93, 00:14:16.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:16.289 "listen_address": { 00:14:16.289 "adrfam": "IPv4", 00:14:16.289 "traddr": "10.0.0.3", 00:14:16.289 "trsvcid": "4420", 00:14:16.289 "trtype": "TCP" 00:14:16.289 }, 00:14:16.289 "peer_address": { 00:14:16.289 "adrfam": "IPv4", 00:14:16.289 "traddr": "10.0.0.1", 00:14:16.289 "trsvcid": "34188", 00:14:16.289 "trtype": "TCP" 00:14:16.289 }, 00:14:16.289 "qid": 0, 00:14:16.289 "state": "enabled", 00:14:16.289 "thread": "nvmf_tgt_poll_group_000" 00:14:16.289 } 00:14:16.289 ]' 00:14:16.289 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.289 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:16.289 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.548 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:16.548 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.548 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.548 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.548 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.806 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:14:16.806 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:14:17.373 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.373 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:17.373 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.373 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.373 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.373 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.373 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
00:14:17.373 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:17.373 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:17.373 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:17.631 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:14:17.631 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:17.631 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:17.631 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:17.631 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:17.631 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:17.631 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3
00:14:17.631 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.631 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.631 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.631 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:17.631 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:17.632 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:18.567
00:14:18.567 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:18.567 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:18.567 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:18.567 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:18.567 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:18.567 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.567 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.567 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.567 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:18.567 {
00:14:18.567 "auth": {
"ffdhe8192", 00:14:18.567 "digest": "sha384", 00:14:18.567 "state": "completed" 00:14:18.567 }, 00:14:18.567 "cntlid": 95, 00:14:18.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:18.567 "listen_address": { 00:14:18.567 "adrfam": "IPv4", 00:14:18.567 "traddr": "10.0.0.3", 00:14:18.567 "trsvcid": "4420", 00:14:18.567 "trtype": "TCP" 00:14:18.567 }, 00:14:18.567 "peer_address": { 00:14:18.567 "adrfam": "IPv4", 00:14:18.567 "traddr": "10.0.0.1", 00:14:18.567 "trsvcid": "34222", 00:14:18.567 "trtype": "TCP" 00:14:18.567 }, 00:14:18.567 "qid": 0, 00:14:18.567 "state": "enabled", 00:14:18.567 "thread": "nvmf_tgt_poll_group_000" 00:14:18.567 } 00:14:18.567 ]' 00:14:18.567 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.825 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:18.825 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.825 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:18.825 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.825 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.825 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.825 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.085 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:14:19.085 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.020 
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:20.020 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:20.587
00:14:20.587 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:20.587 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:20.587 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.846 { 00:14:20.846 "auth": { 00:14:20.846 "dhgroup": "null", 00:14:20.846 "digest": "sha512", 00:14:20.846 "state": "completed" 00:14:20.846 }, 00:14:20.846 "cntlid": 97, 00:14:20.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:20.846 "listen_address": { 00:14:20.846 "adrfam": "IPv4", 00:14:20.846 "traddr": "10.0.0.3", 00:14:20.846 "trsvcid": "4420", 00:14:20.846 "trtype": "TCP" 00:14:20.846 }, 00:14:20.846 "peer_address": { 00:14:20.846 "adrfam": "IPv4", 00:14:20.846 "traddr": "10.0.0.1", 00:14:20.846 "trsvcid": "34240", 00:14:20.846 "trtype": "TCP" 00:14:20.846 }, 00:14:20.846 "qid": 0, 00:14:20.846 "state": "enabled", 00:14:20.846 "thread": "nvmf_tgt_poll_group_000" 00:14:20.846 } 00:14:20.846 ]' 00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.846 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.105 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:14:21.105 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:14:22.040 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.040 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:22.040 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.040 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.040 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:22.040 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.040 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:22.040 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.298 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.557 00:14:22.557 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.557 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.557 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.815 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.815 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.815 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.815 11:36:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.815 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.815 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.815 { 00:14:22.815 "auth": { 00:14:22.815 "dhgroup": "null", 00:14:22.815 "digest": "sha512", 00:14:22.815 "state": "completed" 00:14:22.815 }, 00:14:22.815 "cntlid": 99, 00:14:22.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:22.815 "listen_address": { 00:14:22.815 "adrfam": "IPv4", 00:14:22.815 "traddr": "10.0.0.3", 00:14:22.815 "trsvcid": "4420", 00:14:22.815 "trtype": "TCP" 00:14:22.815 }, 00:14:22.815 "peer_address": { 00:14:22.815 "adrfam": "IPv4", 00:14:22.815 "traddr": "10.0.0.1", 00:14:22.815 "trsvcid": "34264", 00:14:22.815 "trtype": "TCP" 00:14:22.815 }, 00:14:22.815 "qid": 0, 00:14:22.815 "state": "enabled", 00:14:22.815 "thread": "nvmf_tgt_poll_group_000" 00:14:22.815 } 00:14:22.815 ]' 00:14:22.815 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.815 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:22.815 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.073 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:23.073 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.073 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.073 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.073 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.332 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:14:23.332 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:14:23.897 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.898 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:23.898 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.898 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.898 11:36:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.898 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.898 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:23.898 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.156 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.722 00:14:24.722 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.722 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.722 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.981 { 00:14:24.981 "auth": { 00:14:24.981 "dhgroup": "null", 00:14:24.981 "digest": "sha512", 00:14:24.981 "state": "completed" 00:14:24.981 }, 00:14:24.981 "cntlid": 101, 00:14:24.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:24.981 "listen_address": { 00:14:24.981 "adrfam": "IPv4", 00:14:24.981 "traddr": "10.0.0.3", 00:14:24.981 "trsvcid": "4420", 00:14:24.981 "trtype": "TCP" 00:14:24.981 }, 00:14:24.981 "peer_address": { 00:14:24.981 "adrfam": "IPv4", 00:14:24.981 "traddr": "10.0.0.1", 00:14:24.981 "trsvcid": "52926", 00:14:24.981 "trtype": "TCP" 00:14:24.981 }, 00:14:24.981 "qid": 0, 00:14:24.981 "state": "enabled", 00:14:24.981 "thread": "nvmf_tgt_poll_group_000" 00:14:24.981 } 00:14:24.981 ]' 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.981 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.239 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:14:25.239 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:14:26.174 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.174 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:26.174 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.174 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:26.174 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.174 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.174 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:26.174 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:26.432 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:26.432 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.432 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:26.432 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:26.432 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:26.432 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.433 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:14:26.433 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.433 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.433 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.433 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:26.433 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.433 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.691 00:14:26.691 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.691 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.691 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.949 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.949 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.949 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:14:26.949 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:26.949 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:26.949 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:26.949 {
00:14:26.949 "auth": {
00:14:26.949 "dhgroup": "null",
00:14:26.949 "digest": "sha512",
00:14:26.949 "state": "completed"
00:14:26.949 },
00:14:26.949 "cntlid": 103,
00:14:26.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:14:26.949 "listen_address": {
00:14:26.949 "adrfam": "IPv4",
00:14:26.949 "traddr": "10.0.0.3",
00:14:26.949 "trsvcid": "4420",
00:14:26.949 "trtype": "TCP"
00:14:26.949 },
00:14:26.949 "peer_address": {
00:14:26.949 "adrfam": "IPv4",
00:14:26.949 "traddr": "10.0.0.1",
00:14:26.949 "trsvcid": "52946",
00:14:26.949 "trtype": "TCP"
00:14:26.949 },
00:14:26.949 "qid": 0,
00:14:26.949 "state": "enabled",
00:14:26.949 "thread": "nvmf_tgt_poll_group_000"
00:14:26.949 }
00:14:26.949 ]'
00:14:26.949 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:26.949 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:26.949 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:26.949 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:26.949 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:27.207 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:27.207 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:27.207 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:27.466 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=:
00:14:27.466 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=:
00:14:28.034 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:28.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:28.034 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:14:28.034 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.034 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
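With the sha512+null combinations done for every key, the dhgroup loop advances to the FFDHE groups. A dhgroup of null means plain challenge-response DH-HMAC-CHAP with no ephemeral Diffie-Hellman exchange; ffdhe2048 through ffdhe8192 layer a DH exchange of the corresponding modulus size on top. Each iteration re-restricts the host side to exactly one digest/dhgroup pair, e.g. (sketch of the call the trace makes next):

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048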
00:14:28.034 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.034 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:28.034 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:28.034 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:14:28.034 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:28.292 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:28.551
00:14:28.551 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:28.551 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:28.551 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:29.119
11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.119 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.119 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.119 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.119 { 00:14:29.119 "auth": { 00:14:29.119 "dhgroup": "ffdhe2048", 00:14:29.119 "digest": "sha512", 00:14:29.119 "state": "completed" 00:14:29.119 }, 00:14:29.119 "cntlid": 105, 00:14:29.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:29.119 "listen_address": { 00:14:29.119 "adrfam": "IPv4", 00:14:29.119 "traddr": "10.0.0.3", 00:14:29.119 "trsvcid": "4420", 00:14:29.119 "trtype": "TCP" 00:14:29.119 }, 00:14:29.119 "peer_address": { 00:14:29.119 "adrfam": "IPv4", 00:14:29.119 "traddr": "10.0.0.1", 00:14:29.119 "trsvcid": "52960", 00:14:29.119 "trtype": "TCP" 00:14:29.119 }, 00:14:29.119 "qid": 0, 00:14:29.119 "state": "enabled", 00:14:29.119 "thread": "nvmf_tgt_poll_group_000" 00:14:29.119 } 00:14:29.119 ]' 00:14:29.119 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.119 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.119 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.119 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:29.119 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.119 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.119 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.119 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.380 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:14:29.380 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:14:29.947 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.947 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:29.947 11:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.947 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.947 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.947 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.947 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:29.947 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.515 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.773 00:14:30.773 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.773 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.773 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.032 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]]
00:14:31.032 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:31.032 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:31.032 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:31.032 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:31.032 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:31.032 {
00:14:31.032 "auth": {
00:14:31.032 "dhgroup": "ffdhe2048",
00:14:31.032 "digest": "sha512",
00:14:31.032 "state": "completed"
00:14:31.032 },
00:14:31.032 "cntlid": 107,
00:14:31.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:14:31.032 "listen_address": {
00:14:31.032 "adrfam": "IPv4",
00:14:31.032 "traddr": "10.0.0.3",
00:14:31.032 "trsvcid": "4420",
00:14:31.032 "trtype": "TCP"
00:14:31.032 },
00:14:31.032 "peer_address": {
00:14:31.032 "adrfam": "IPv4",
00:14:31.032 "traddr": "10.0.0.1",
00:14:31.032 "trsvcid": "52982",
00:14:31.032 "trtype": "TCP"
00:14:31.032 },
00:14:31.032 "qid": 0,
00:14:31.032 "state": "enabled",
00:14:31.032 "thread": "nvmf_tgt_poll_group_000"
00:14:31.032 }
00:14:31.032 ]'
00:14:31.291 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:31.291 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:31.291 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:31.291 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:31.291 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:31.291 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:31.291 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:31.291 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:31.549 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==:
00:14:31.549 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==:
00:14:32.117 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:32.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
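The verification step repeated in every cycle is visible again above: the test pulls the subsystem's qpairs from the target and asserts that the negotiated parameters and the final auth state match what was configured. Reduced to its essence (same jq filters as in the trace; condensed sketch, not the literal script):

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]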
00:14:32.117 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:14:32.117 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:32.117 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:32.117 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:32.117 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:32.375 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:14:32.375 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:32.634 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:32.893
00:14:32.893 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:32.893 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:32.893 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_get_controllers 00:14:33.151 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.151 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.151 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.151 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.151 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.151 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.151 { 00:14:33.151 "auth": { 00:14:33.151 "dhgroup": "ffdhe2048", 00:14:33.151 "digest": "sha512", 00:14:33.151 "state": "completed" 00:14:33.151 }, 00:14:33.151 "cntlid": 109, 00:14:33.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:33.151 "listen_address": { 00:14:33.151 "adrfam": "IPv4", 00:14:33.151 "traddr": "10.0.0.3", 00:14:33.151 "trsvcid": "4420", 00:14:33.151 "trtype": "TCP" 00:14:33.151 }, 00:14:33.151 "peer_address": { 00:14:33.151 "adrfam": "IPv4", 00:14:33.151 "traddr": "10.0.0.1", 00:14:33.151 "trsvcid": "53012", 00:14:33.151 "trtype": "TCP" 00:14:33.151 }, 00:14:33.151 "qid": 0, 00:14:33.151 "state": "enabled", 00:14:33.151 "thread": "nvmf_tgt_poll_group_000" 00:14:33.151 } 00:14:33.151 ]' 00:14:33.151 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.410 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:33.410 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.410 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:33.410 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.410 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.410 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.410 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.668 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:14:33.668 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:14:34.236 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.236 11:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:34.236 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.236 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.236 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.236 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.236 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:34.236 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:34.497 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:34.497 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.497 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:34.497 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:34.497 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:34.497 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.497 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:14:34.497 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.497 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.497 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.497 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:34.497 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:34.759 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.021 00:14:35.021 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.021 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.021 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.280 { 00:14:35.280 "auth": { 00:14:35.280 "dhgroup": "ffdhe2048", 00:14:35.280 "digest": "sha512", 00:14:35.280 "state": "completed" 00:14:35.280 }, 00:14:35.280 "cntlid": 111, 00:14:35.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:35.280 "listen_address": { 00:14:35.280 "adrfam": "IPv4", 00:14:35.280 "traddr": "10.0.0.3", 00:14:35.280 "trsvcid": "4420", 00:14:35.280 "trtype": "TCP" 00:14:35.280 }, 00:14:35.280 "peer_address": { 00:14:35.280 "adrfam": "IPv4", 00:14:35.280 "traddr": "10.0.0.1", 00:14:35.280 "trsvcid": "45774", 00:14:35.280 "trtype": "TCP" 00:14:35.280 }, 00:14:35.280 "qid": 0, 00:14:35.280 "state": "enabled", 00:14:35.280 "thread": "nvmf_tgt_poll_group_000" 00:14:35.280 } 00:14:35.280 ]' 00:14:35.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:35.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.539 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:35.539 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.539 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.539 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.539 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.797 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:14:35.797 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:14:36.363 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.363 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:36.363 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.363 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.363 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.364 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.364 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.364 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:36.364 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.622 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.190 00:14:37.190 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.190 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.190 11:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.449 { 00:14:37.449 "auth": { 00:14:37.449 "dhgroup": "ffdhe3072", 00:14:37.449 "digest": "sha512", 00:14:37.449 "state": "completed" 00:14:37.449 }, 00:14:37.449 "cntlid": 113, 00:14:37.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:37.449 "listen_address": { 00:14:37.449 "adrfam": "IPv4", 00:14:37.449 "traddr": "10.0.0.3", 00:14:37.449 "trsvcid": "4420", 00:14:37.449 "trtype": "TCP" 00:14:37.449 }, 00:14:37.449 "peer_address": { 00:14:37.449 "adrfam": "IPv4", 00:14:37.449 "traddr": "10.0.0.1", 00:14:37.449 "trsvcid": "45800", 00:14:37.449 "trtype": "TCP" 00:14:37.449 }, 00:14:37.449 "qid": 0, 00:14:37.449 "state": "enabled", 00:14:37.449 "thread": "nvmf_tgt_poll_group_000" 00:14:37.449 } 00:14:37.449 ]' 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.449 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.016 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:14:38.016 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 
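The records above and below repeat the same DH-HMAC-CHAP cycle with a different digest/dhgroup/key combination each time. As a reading aid, the sketch below condenses one such cycle into a single shell function. The RPC script path, host RPC socket, addresses, NQNs, and flags are copied from the logged commands; the wrapper name auth_cycle and its argument handling are illustrative only (the real logic lives in test/nvmf/target/auth.sh, whose xtrace lines appear throughout this log), and the key$N/ckey$N names are keyring entries registered earlier in the run.

# Condensed sketch of one authentication cycle traced by the surrounding records.
# key3 iterations carry no paired controller key, hence the ${ctrl_secret:+...}
# guards (the same idiom as the logged auth.sh@68 line).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8

auth_cycle() {
    local digest=$1 dhgroup=$2 keyid=$3 secret=$4 ctrl_secret=$5
    local ckey=(${ctrl_secret:+--dhchap-ctrlr-key "ckey$keyid"})
    local csec=(${ctrl_secret:+--dhchap-ctrl-secret "$ctrl_secret"})
    # Pin the host-side initiator to a single digest/dhgroup pair (auth.sh@121).
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Authorize the host on the target subsystem with the matching key(s) (auth.sh@70).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
    # Attach a controller through the host RPC; this performs the handshake (auth.sh@60).
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
    # Confirm the negotiated parameters from the target's side (auth.sh@74-77).
    local qpairs
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect $digest
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect $dhgroup
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expect "completed"
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    # Repeat the handshake with the kernel initiator, passing the raw secrets (auth.sh@36).
    nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 \
        --dhchap-secret "$secret" "${csec[@]}"
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}
# e.g. the iteration logged just above:
#   auth_cycle sha512 ffdhe3072 0 "DHHC-1:00:<base64>:" "DHHC-1:03:<base64>:"
# (base64 key material elided here; the full secrets appear verbatim in the records).

The DHHC-1:<nn>:<base64>: strings handed to nvme connect are the shared secrets themselves; the two-digit field after DHHC-1 records how the key material was transformed (00 untransformed, 01/02/03 for SHA-256/384/512), which is why cntlid and digest checks in the records pair off against the secret variant in use.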
00:14:38.582 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.582 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:38.582 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.582 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.582 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.582 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.582 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:38.582 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.841 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.100 00:14:39.100 11:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.100 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.100 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.667 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.667 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.667 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.667 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.667 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.667 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.667 { 00:14:39.667 "auth": { 00:14:39.667 "dhgroup": "ffdhe3072", 00:14:39.667 "digest": "sha512", 00:14:39.667 "state": "completed" 00:14:39.667 }, 00:14:39.667 "cntlid": 115, 00:14:39.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:39.667 "listen_address": { 00:14:39.667 "adrfam": "IPv4", 00:14:39.667 "traddr": "10.0.0.3", 00:14:39.667 "trsvcid": "4420", 00:14:39.667 "trtype": "TCP" 00:14:39.667 }, 00:14:39.667 "peer_address": { 00:14:39.667 "adrfam": "IPv4", 00:14:39.667 "traddr": "10.0.0.1", 00:14:39.667 "trsvcid": "45822", 00:14:39.667 "trtype": "TCP" 00:14:39.667 }, 00:14:39.667 "qid": 0, 00:14:39.667 "state": "enabled", 00:14:39.667 "thread": "nvmf_tgt_poll_group_000" 00:14:39.667 } 00:14:39.667 ]' 00:14:39.667 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.667 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:39.667 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.668 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:39.668 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.668 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.668 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.668 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.930 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:14:39.930 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret 
DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:14:40.503 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.503 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:40.503 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.503 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.762 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.762 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.762 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:40.762 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.021 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.279 00:14:41.279 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.279 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.279 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.538 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.538 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.538 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.538 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.538 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.538 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.538 { 00:14:41.538 "auth": { 00:14:41.538 "dhgroup": "ffdhe3072", 00:14:41.538 "digest": "sha512", 00:14:41.538 "state": "completed" 00:14:41.538 }, 00:14:41.538 "cntlid": 117, 00:14:41.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:41.538 "listen_address": { 00:14:41.538 "adrfam": "IPv4", 00:14:41.538 "traddr": "10.0.0.3", 00:14:41.538 "trsvcid": "4420", 00:14:41.538 "trtype": "TCP" 00:14:41.538 }, 00:14:41.538 "peer_address": { 00:14:41.538 "adrfam": "IPv4", 00:14:41.538 "traddr": "10.0.0.1", 00:14:41.538 "trsvcid": "45848", 00:14:41.538 "trtype": "TCP" 00:14:41.538 }, 00:14:41.538 "qid": 0, 00:14:41.538 "state": "enabled", 00:14:41.538 "thread": "nvmf_tgt_poll_group_000" 00:14:41.538 } 00:14:41.538 ]' 00:14:41.538 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.538 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:41.538 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.797 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:41.797 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.797 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.797 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.797 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.055 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:14:42.055 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:14:42.622 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.622 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:42.622 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.622 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.622 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.622 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.622 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:42.622 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:42.880 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:42.880 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.880 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:42.880 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:42.880 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:42.880 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.880 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:14:42.880 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.880 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.880 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.881 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:42.881 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.881 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.448 00:14:43.448 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.448 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.448 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.707 { 00:14:43.707 "auth": { 00:14:43.707 "dhgroup": "ffdhe3072", 00:14:43.707 "digest": "sha512", 00:14:43.707 "state": "completed" 00:14:43.707 }, 00:14:43.707 "cntlid": 119, 00:14:43.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:43.707 "listen_address": { 00:14:43.707 "adrfam": "IPv4", 00:14:43.707 "traddr": "10.0.0.3", 00:14:43.707 "trsvcid": "4420", 00:14:43.707 "trtype": "TCP" 00:14:43.707 }, 00:14:43.707 "peer_address": { 00:14:43.707 "adrfam": "IPv4", 00:14:43.707 "traddr": "10.0.0.1", 00:14:43.707 "trsvcid": "45890", 00:14:43.707 "trtype": "TCP" 00:14:43.707 }, 00:14:43.707 "qid": 0, 00:14:43.707 "state": "enabled", 00:14:43.707 "thread": "nvmf_tgt_poll_group_000" 00:14:43.707 } 00:14:43.707 ]' 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.707 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.274 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:14:44.274 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:14:44.840 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.840 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:44.840 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.840 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.840 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.840 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.840 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.840 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:44.840 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.098 11:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.357 00:14:45.615 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.615 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.615 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.873 { 00:14:45.873 "auth": { 00:14:45.873 "dhgroup": "ffdhe4096", 00:14:45.873 "digest": "sha512", 00:14:45.873 "state": "completed" 00:14:45.873 }, 00:14:45.873 "cntlid": 121, 00:14:45.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:45.873 "listen_address": { 00:14:45.873 "adrfam": "IPv4", 00:14:45.873 "traddr": "10.0.0.3", 00:14:45.873 "trsvcid": "4420", 00:14:45.873 "trtype": "TCP" 00:14:45.873 }, 00:14:45.873 "peer_address": { 00:14:45.873 "adrfam": "IPv4", 00:14:45.873 "traddr": "10.0.0.1", 00:14:45.873 "trsvcid": "39944", 00:14:45.873 "trtype": "TCP" 00:14:45.873 }, 00:14:45.873 "qid": 0, 00:14:45.873 "state": "enabled", 00:14:45.873 "thread": "nvmf_tgt_poll_group_000" 00:14:45.873 } 00:14:45.873 ]' 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.873 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.874 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.131 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret 
DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:14:46.388 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:14:46.954 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.954 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:46.954 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.954 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.954 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.954 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.954 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:46.954 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.212 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.777 00:14:47.777 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.777 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.777 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.036 { 00:14:48.036 "auth": { 00:14:48.036 "dhgroup": "ffdhe4096", 00:14:48.036 "digest": "sha512", 00:14:48.036 "state": "completed" 00:14:48.036 }, 00:14:48.036 "cntlid": 123, 00:14:48.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:48.036 "listen_address": { 00:14:48.036 "adrfam": "IPv4", 00:14:48.036 "traddr": "10.0.0.3", 00:14:48.036 "trsvcid": "4420", 00:14:48.036 "trtype": "TCP" 00:14:48.036 }, 00:14:48.036 "peer_address": { 00:14:48.036 "adrfam": "IPv4", 00:14:48.036 "traddr": "10.0.0.1", 00:14:48.036 "trsvcid": "39974", 00:14:48.036 "trtype": "TCP" 00:14:48.036 }, 00:14:48.036 "qid": 0, 00:14:48.036 "state": "enabled", 00:14:48.036 "thread": "nvmf_tgt_poll_group_000" 00:14:48.036 } 00:14:48.036 ]' 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.036 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.294 11:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:14:48.294 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==: 00:14:49.229 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.229 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:49.229 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.229 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.229 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.229 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.229 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:49.229 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:49.487 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:49.487 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.487 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:49.487 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:49.487 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:49.487 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.487 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.487 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.487 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.487 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.487 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.488 11:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.488 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.746 00:14:49.746 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.746 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.746 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.004 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.004 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.004 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.004 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.004 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.004 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.004 { 00:14:50.004 "auth": { 00:14:50.004 "dhgroup": "ffdhe4096", 00:14:50.004 "digest": "sha512", 00:14:50.004 "state": "completed" 00:14:50.004 }, 00:14:50.004 "cntlid": 125, 00:14:50.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:14:50.004 "listen_address": { 00:14:50.004 "adrfam": "IPv4", 00:14:50.004 "traddr": "10.0.0.3", 00:14:50.004 "trsvcid": "4420", 00:14:50.004 "trtype": "TCP" 00:14:50.004 }, 00:14:50.004 "peer_address": { 00:14:50.004 "adrfam": "IPv4", 00:14:50.004 "traddr": "10.0.0.1", 00:14:50.004 "trsvcid": "40004", 00:14:50.004 "trtype": "TCP" 00:14:50.004 }, 00:14:50.004 "qid": 0, 00:14:50.004 "state": "enabled", 00:14:50.004 "thread": "nvmf_tgt_poll_group_000" 00:14:50.004 } 00:14:50.004 ]' 00:14:50.004 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.263 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.263 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.263 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:50.263 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.263 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.263 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.263 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.522 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:14:50.522 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU: 00:14:51.092 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.092 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:14:51.092 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.092 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.092 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.092 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.092 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:51.092 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:51.362 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:51.362 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.362 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:51.362 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:51.362 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:51.362 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.362 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:14:51.362 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.362 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.626 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.626 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3
00:14:51.626 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:51.626 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:51.885
00:14:51.885 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:51.885 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:51.885 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:52.143 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:52.143 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:52.143 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.143 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:52.143 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.144 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:52.144 {
00:14:52.144 "auth": {
00:14:52.144 "dhgroup": "ffdhe4096",
00:14:52.144 "digest": "sha512",
00:14:52.144 "state": "completed"
00:14:52.144 },
00:14:52.144 "cntlid": 127,
00:14:52.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:14:52.144 "listen_address": {
00:14:52.144 "adrfam": "IPv4",
00:14:52.144 "traddr": "10.0.0.3",
00:14:52.144 "trsvcid": "4420",
00:14:52.144 "trtype": "TCP"
00:14:52.144 },
00:14:52.144 "peer_address": {
00:14:52.144 "adrfam": "IPv4",
00:14:52.144 "traddr": "10.0.0.1",
00:14:52.144 "trsvcid": "40034",
00:14:52.144 "trtype": "TCP"
00:14:52.144 },
00:14:52.144 "qid": 0,
00:14:52.144 "state": "enabled",
00:14:52.144 "thread": "nvmf_tgt_poll_group_000"
00:14:52.144 }
00:14:52.144 ]'
00:14:52.144 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:52.144 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:52.144 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:52.402 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:52.402 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:52.402 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:52.402 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:52.402 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:52.661 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=:
00:14:52.661 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=:
00:14:53.228 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:53.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:53.486 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:14:53.486 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.486 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:53.486 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.486 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:53.486 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:53.486 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:14:53.486 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:14:53.744 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:14:53.744 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:53.744 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:53.744 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:53.744 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:53.744 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:53.744 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:53.745 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.745 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:53.745 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.745 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:53.745 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:53.745 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:54.003
00:14:54.261 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:54.261 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:54.261 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:54.524 {
00:14:54.524 "auth": {
00:14:54.524 "dhgroup": "ffdhe6144",
00:14:54.524 "digest": "sha512",
00:14:54.524 "state": "completed"
00:14:54.524 },
00:14:54.524 "cntlid": 129,
00:14:54.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:14:54.524 "listen_address": {
00:14:54.524 "adrfam": "IPv4",
00:14:54.524 "traddr": "10.0.0.3",
00:14:54.524 "trsvcid": "4420",
00:14:54.524 "trtype": "TCP"
00:14:54.524 },
00:14:54.524 "peer_address": {
00:14:54.524 "adrfam": "IPv4",
00:14:54.524 "traddr": "10.0.0.1",
00:14:54.524 "trsvcid": "48644",
00:14:54.524 "trtype": "TCP"
00:14:54.524 },
00:14:54.524 "qid": 0,
00:14:54.524 "state": "enabled",
00:14:54.524 "thread": "nvmf_tgt_poll_group_000"
00:14:54.524 }
00:14:54.524 ]'
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:54.524 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:54.784 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=:
00:14:54.785 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=:
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:55.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:55.719 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:55.720 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:55.720 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:55.720 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:55.720 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:55.720 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:55.720 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:56.288
00:14:56.288 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:56.288 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:56.288 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:56.546 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:56.546 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:56.546 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.546 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:56.546 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.546 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:56.546 {
00:14:56.546 "auth": {
00:14:56.546 "dhgroup": "ffdhe6144",
00:14:56.546 "digest": "sha512",
00:14:56.546 "state": "completed"
00:14:56.546 },
00:14:56.546 "cntlid": 131,
00:14:56.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:14:56.546 "listen_address": {
00:14:56.546 "adrfam": "IPv4",
00:14:56.546 "traddr": "10.0.0.3",
00:14:56.546 "trsvcid": "4420",
00:14:56.546 "trtype": "TCP"
00:14:56.546 },
00:14:56.546 "peer_address": {
00:14:56.546 "adrfam": "IPv4",
00:14:56.546 "traddr": "10.0.0.1",
00:14:56.546 "trsvcid": "48664",
00:14:56.546 "trtype": "TCP"
00:14:56.546 },
00:14:56.546 "qid": 0,
00:14:56.546 "state": "enabled",
00:14:56.546 "thread": "nvmf_tgt_poll_group_000"
00:14:56.546 }
00:14:56.546 ]'
00:14:56.546 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:56.546 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:56.546 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:56.805 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:56.805 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:56.805 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:56.805 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:56.805 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:57.065 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==:
00:14:57.065 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==:
00:14:57.632 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:57.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:57.632 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:14:57.632 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:57.632 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:57.632 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:57.632 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:57.632 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:14:57.632 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:58.199 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:58.458
00:14:58.458 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:58.458 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:58.458 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:58.717 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:58.717 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:58.717 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:58.717 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.717 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:58.717 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:58.717 {
00:14:58.717 "auth": {
00:14:58.717 "dhgroup": "ffdhe6144",
00:14:58.717 "digest": "sha512",
00:14:58.717 "state": "completed"
00:14:58.717 },
00:14:58.717 "cntlid": 133,
00:14:58.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:14:58.717 "listen_address": {
00:14:58.717 "adrfam": "IPv4",
00:14:58.717 "traddr": "10.0.0.3",
00:14:58.717 "trsvcid": "4420",
00:14:58.717 "trtype": "TCP"
00:14:58.717 },
00:14:58.717 "peer_address": {
00:14:58.717 "adrfam": "IPv4",
00:14:58.717 "traddr": "10.0.0.1",
00:14:58.717 "trsvcid": "48690",
00:14:58.717 "trtype": "TCP"
00:14:58.717 },
00:14:58.717 "qid": 0,
00:14:58.717 "state": "enabled",
00:14:58.717 "thread": "nvmf_tgt_poll_group_000"
00:14:58.717 }
00:14:58.717 ]'
00:14:58.976 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:58.976 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:58.976 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:58.976 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:58.976 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:58.976 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:58.976 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:58.976 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:59.234 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU:
00:14:59.234 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU:
00:14:59.801 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:59.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:59.801 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:14:59.801 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:59.801 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:59.801 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:59.801 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:59.801 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:14:59.801 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:00.060 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:15:00.060 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:00.060 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:00.060 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:00.060 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:00.060 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:00.060 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3
00:15:00.061 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:00.061 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:00.061 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:00.061 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:00.061 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:00.061 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:00.627
00:15:00.627 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:00.627 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:00.627 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:01.194 {
00:15:01.194 "auth": {
00:15:01.194 "dhgroup": "ffdhe6144",
00:15:01.194 "digest": "sha512",
00:15:01.194 "state": "completed"
00:15:01.194 },
00:15:01.194 "cntlid": 135,
00:15:01.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:15:01.194 "listen_address": {
00:15:01.194 "adrfam": "IPv4",
00:15:01.194 "traddr": "10.0.0.3",
00:15:01.194 "trsvcid": "4420",
00:15:01.194 "trtype": "TCP"
00:15:01.194 },
00:15:01.194 "peer_address": {
00:15:01.194 "adrfam": "IPv4",
00:15:01.194 "traddr": "10.0.0.1",
00:15:01.194 "trsvcid": "48716",
00:15:01.194 "trtype": "TCP"
00:15:01.194 },
00:15:01.194 "qid": 0,
00:15:01.194 "state": "enabled",
00:15:01.194 "thread": "nvmf_tgt_poll_group_000"
00:15:01.194 }
00:15:01.194 ]'
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:01.194 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:01.452 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=:
00:15:01.452 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=:
00:15:02.019 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:02.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:02.019 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:15:02.019 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.019 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.019 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.019 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:02.019 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:02.019 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:15:02.019 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:02.586 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:03.152
00:15:03.152 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:03.152 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:03.152 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:03.411 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:03.411 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:03.411 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.411 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.411 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.411 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:03.411 {
00:15:03.411 "auth": {
00:15:03.411 "dhgroup": "ffdhe8192",
00:15:03.411 "digest": "sha512",
00:15:03.411 "state": "completed"
00:15:03.411 },
00:15:03.411 "cntlid": 137,
00:15:03.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:15:03.411 "listen_address": {
00:15:03.411 "adrfam": "IPv4",
00:15:03.411 "traddr": "10.0.0.3",
00:15:03.411 "trsvcid": "4420",
00:15:03.411 "trtype": "TCP"
00:15:03.411 },
00:15:03.411 "peer_address": {
00:15:03.411 "adrfam": "IPv4",
00:15:03.411 "traddr": "10.0.0.1",
00:15:03.411 "trsvcid": "48736",
00:15:03.411 "trtype": "TCP"
00:15:03.411 },
00:15:03.411 "qid": 0,
00:15:03.411 "state": "enabled",
00:15:03.411 "thread": "nvmf_tgt_poll_group_000"
00:15:03.411 }
00:15:03.411 ]'
00:15:03.411 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:03.411 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:03.411 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:03.669 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:03.669 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:03.669 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:03.669 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:03.669 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:03.927 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=:
00:15:03.927 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=:
00:15:04.505 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:04.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:04.505 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:15:04.505 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.505 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.505 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.505 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:04.505 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:15:04.505 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:05.072 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:05.640
00:15:05.640 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:05.640 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:05.640 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:05.898 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:05.898 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:05.898 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.898 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:05.898 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.898 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:05.898 {
00:15:05.898 "auth": {
00:15:05.898 "dhgroup": "ffdhe8192",
00:15:05.898 "digest": "sha512",
00:15:05.898 "state": "completed"
00:15:05.898 },
00:15:05.898 "cntlid": 139,
00:15:05.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:15:05.898 "listen_address": {
00:15:05.898 "adrfam": "IPv4",
00:15:05.898 "traddr": "10.0.0.3",
00:15:05.898 "trsvcid": "4420",
00:15:05.898 "trtype": "TCP"
00:15:05.898 },
00:15:05.898 "peer_address": {
00:15:05.898 "adrfam": "IPv4",
00:15:05.898 "traddr": "10.0.0.1",
00:15:05.898 "trsvcid": "33550",
00:15:05.898 "trtype": "TCP"
00:15:05.898 },
00:15:05.898 "qid": 0,
00:15:05.898 "state": "enabled",
00:15:05.898 "thread": "nvmf_tgt_poll_group_000"
00:15:05.898 }
00:15:05.898 ]'
00:15:05.898 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:05.898 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:05.898 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:05.898 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:05.898 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:06.156 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:06.156 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:06.156 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:06.415 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==:
00:15:06.415 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: --dhchap-ctrl-secret DHHC-1:02:MTRkMDAyZTM2NDg5YzUyNGFkZDk1YzA5NGYxNDc1NTBlNjNkNjZmODEzMzUxM2Yy4R272Q==:
00:15:06.981 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:06.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:06.981 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:15:06.982 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.982 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:06.982 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.982 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:06.982 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:15:06.982 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:07.240 11:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:08.174
00:15:08.174 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:08.174 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:08.174 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:08.174 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:08.174 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:08.174 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:08.174 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:08.174 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:08.174 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:08.174 {
00:15:08.175 "auth": {
00:15:08.175 "dhgroup": "ffdhe8192",
00:15:08.175 "digest": "sha512",
00:15:08.175 "state": "completed"
00:15:08.175 },
00:15:08.175 "cntlid": 141,
00:15:08.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:15:08.175 "listen_address": {
00:15:08.175 "adrfam": "IPv4",
00:15:08.175 "traddr": "10.0.0.3",
00:15:08.175 "trsvcid": "4420",
00:15:08.175 "trtype": "TCP"
00:15:08.175 },
00:15:08.175 "peer_address": {
00:15:08.175 "adrfam": "IPv4",
00:15:08.175 "traddr": "10.0.0.1",
00:15:08.175 "trsvcid": "33592",
00:15:08.175 "trtype": "TCP"
00:15:08.175 },
00:15:08.175 "qid": 0,
00:15:08.175 "state": "enabled",
00:15:08.175 "thread": "nvmf_tgt_poll_group_000"
00:15:08.175 }
00:15:08.175 ]'
00:15:08.175 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:08.433 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:08.433 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:08.433 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:08.433 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:08.433 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:08.433 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:08.433 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:08.691 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU:
00:15:08.691 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:01:OGZjMTkzMDUzNGJhZmU0OWUxOGM0NmRkZGI3NTVkNTktkkdU:
00:15:09.258 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:09.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:09.258 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:15:09.258 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:09.258 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:09.258 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:09.258 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:09.258 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:15:09.258 11:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:15:09.516 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:15:09.516 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:09.516 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:09.516 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:09.516 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:09.516 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:09.516 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3
00:15:09.516 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:09.516 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:09.774 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:09.774 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:09.774 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:09.774 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:10.390
00:15:10.390 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:10.390 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:10.390 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:10.648 {
00:15:10.648 "auth": {
00:15:10.648 "dhgroup": "ffdhe8192",
00:15:10.648 "digest": "sha512",
00:15:10.648 "state": "completed"
00:15:10.648 },
00:15:10.648 "cntlid": 143,
00:15:10.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8",
00:15:10.648 "listen_address": {
00:15:10.648 "adrfam": "IPv4",
00:15:10.648 "traddr": "10.0.0.3",
00:15:10.648 "trsvcid": "4420",
00:15:10.648 "trtype": "TCP"
00:15:10.648 },
00:15:10.648 "peer_address": {
00:15:10.648 "adrfam": "IPv4",
00:15:10.648 "traddr": "10.0.0.1",
00:15:10.648 "trsvcid": "33614",
00:15:10.648 "trtype": "TCP"
00:15:10.648 },
00:15:10.648 "qid": 0,
00:15:10.648 "state": "enabled",
00:15:10.648 "thread": "nvmf_tgt_poll_group_000"
00:15:10.648 }
00:15:10.648 ]'
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:10.648 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:11.215 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=:
00:15:11.215 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=:
00:15:11.780 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:11.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:11.780 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:15:11.780 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.780 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:11.780 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.780 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:15:11.780 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:15:11.780 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:15:11.780 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:15:11.780 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:15:11.780 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:12.038 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:12.972
00:15:12.972 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:12.972 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:12.972 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:12.972 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:12.972 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:12.972 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:12.972 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:12.972 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:12.972 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:12.972 {
00:15:12.972 "auth": {
00:15:12.972 "dhgroup": "ffdhe8192",
00:15:12.972 "digest": "sha512",
00:15:12.972 "state": "completed"
00:15:12.972 },
00:15:12.972
"cntlid": 145, 00:15:12.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:15:12.972 "listen_address": { 00:15:12.972 "adrfam": "IPv4", 00:15:12.972 "traddr": "10.0.0.3", 00:15:12.972 "trsvcid": "4420", 00:15:12.972 "trtype": "TCP" 00:15:12.972 }, 00:15:12.972 "peer_address": { 00:15:12.972 "adrfam": "IPv4", 00:15:12.972 "traddr": "10.0.0.1", 00:15:12.972 "trsvcid": "33636", 00:15:12.972 "trtype": "TCP" 00:15:12.972 }, 00:15:12.972 "qid": 0, 00:15:12.972 "state": "enabled", 00:15:12.972 "thread": "nvmf_tgt_poll_group_000" 00:15:12.972 } 00:15:12.972 ]' 00:15:12.972 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.972 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:12.972 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.230 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:13.230 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.230 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.230 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.230 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.487 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:15:13.487 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:00:Y2JiZGI3MTE3ZjcwNmFhMzJhZDBkN2VhODEwZGRmNDVmMjNkZmIzNzJmMzBiY2JlF9S+tg==: --dhchap-ctrl-secret DHHC-1:03:YTgxZGEwZTA3Yzc1NmU1YjY0NGFlMGMzYTNiOGIyMzA1MzRjOTQ5YjM4MzY4MzJlNjE5ZThmY2EyMGU5Y2I5ZcLXVvg=: 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 00:15:14.052 11:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:14.052 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:14.616 2024/12/16 11:37:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:14.616 request: 00:15:14.616 { 00:15:14.616 "method": "bdev_nvme_attach_controller", 00:15:14.616 "params": { 00:15:14.616 "name": "nvme0", 00:15:14.616 "trtype": "tcp", 00:15:14.616 "traddr": "10.0.0.3", 00:15:14.616 "adrfam": "ipv4", 00:15:14.616 "trsvcid": "4420", 00:15:14.616 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:14.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:15:14.616 "prchk_reftag": false, 00:15:14.616 "prchk_guard": false, 00:15:14.616 "hdgst": false, 00:15:14.616 "ddgst": false, 00:15:14.616 "dhchap_key": "key2", 00:15:14.616 "allow_unrecognized_csi": false 00:15:14.616 } 00:15:14.616 } 00:15:14.616 Got JSON-RPC error response 00:15:14.616 GoRPCClient: error on JSON-RPC call 00:15:14.616 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:14.616 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
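The failure above is deliberate: the host entry was re-registered with key1 only, so presenting key2 must be rejected, and the initiator surfaces that as JSON-RPC Code=-5 (Input/output error). The NOT helper wraps the call and inverts its exit status; the same expectation written out long-hand, using the log's hostrpc wrapper around rpc.py -s /var/tmp/host.sock and a placeholder HOSTNQN:

    # The attach is supposed to fail; success here is the test failure.
    if hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
          -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
        echo "attach with unregistered key2 unexpectedly succeeded" >&2
        exit 1
    fi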
00:15:14.616 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:14.616 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:14.616 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:14.616 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.616 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.616 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.616 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.616 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.616 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.616 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.873 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:14.873 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:14.874 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:14.874 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:14.874 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.874 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:14.874 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.874 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:14.874 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:14.874 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:15.440 2024/12/16 11:37:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:15.440 request: 00:15:15.440 { 00:15:15.440 "method": "bdev_nvme_attach_controller", 00:15:15.440 "params": { 00:15:15.440 "name": "nvme0", 00:15:15.440 "trtype": "tcp", 00:15:15.440 "traddr": "10.0.0.3", 00:15:15.440 "adrfam": "ipv4", 00:15:15.440 "trsvcid": "4420", 00:15:15.440 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:15.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:15:15.440 "prchk_reftag": false, 00:15:15.440 "prchk_guard": false, 00:15:15.440 "hdgst": false, 00:15:15.440 "ddgst": false, 00:15:15.440 "dhchap_key": "key1", 00:15:15.440 "dhchap_ctrlr_key": "ckey2", 00:15:15.440 "allow_unrecognized_csi": false 00:15:15.440 } 00:15:15.440 } 00:15:15.440 Got JSON-RPC error response 00:15:15.440 GoRPCClient: error on JSON-RPC call 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.440 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.084 2024/12/16 11:37:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:16.084 request: 00:15:16.084 { 00:15:16.084 "method": "bdev_nvme_attach_controller", 00:15:16.084 "params": { 00:15:16.084 "name": "nvme0", 00:15:16.084 "trtype": "tcp", 00:15:16.084 "traddr": "10.0.0.3", 00:15:16.084 "adrfam": "ipv4", 00:15:16.084 "trsvcid": "4420", 00:15:16.084 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:16.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:15:16.084 "prchk_reftag": false, 00:15:16.084 "prchk_guard": false, 00:15:16.084 "hdgst": false, 00:15:16.084 "ddgst": false, 00:15:16.084 "dhchap_key": "key1", 00:15:16.084 "dhchap_ctrlr_key": "ckey1", 00:15:16.084 "allow_unrecognized_csi": false 00:15:16.084 } 00:15:16.084 } 00:15:16.084 Got JSON-RPC error response 00:15:16.084 GoRPCClient: error on JSON-RPC call 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 78376 00:15:16.084 11:37:16 
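That completes three rejection cases in a row: a key the host entry never held (key2), a wrong controller key (ckey2 against a host registered with ckey1), and a controller key the target was never given (ckey1 after the host was re-added with key1 alone). One full cycle of the pattern, sketched with a placeholder HOSTNQN and the log's rpc_cmd/hostrpc helpers:

    # Re-register the host with key1 + ckey1, then prove a mismatched
    # controller key cannot authenticate.
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
            --dhchap-key key1 --dhchap-ctrlr-key ckey1
    if hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
          -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
          --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        exit 1   # mismatched controller key must be rejected
    fi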
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 78376 ']' 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 78376 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78376 00:15:16.084 killing process with pid 78376 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78376' 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 78376 00:15:16.084 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 78376 00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=83317 00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 83317 00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 83317 ']' 00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.343 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:16.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 83317 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 83317 ']' 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
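With the first target (pid 78376) gone, the suite relaunches nvmf_tgt held at --wait-for-rpc and with -L nvmf_auth so the DH-HMAC-CHAP state machine logs its transitions; keyring keys are then loaded before anything can connect. A reduced sketch of the relaunch (paths and flags as captured above; the backgrounding and pid capture are assumed internals of the nvmfappstart helper):

    # Restart the target inside its netns, paused until RPC setup is done.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # waitforlisten then blocks until /var/tmp/spdk.sock accepts RPCs,
    # after which the keyring entries below are registered.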
00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.601 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.859 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.859 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:16.859 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:16.859 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.859 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.117 null0 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.LLV 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.CuG ]] 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CuG 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Qet 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.ams ]] 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ams 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:17.117 11:37:17 
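The keyring_file_add_key calls above are iterations of the registration loop in target/auth.sh (@174-176): every generated key file becomes a named keyring entry, and a ckey entry is added only when a bidirectional secret exists for that slot. The loop, reconstructed from the trace (the keys/ckeys arrays are assumed to hold the /tmp/spdk.key-* paths):

    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
        # Only slots with a controller secret get a matching ckey entry.
        if [[ -n ${ckeys[$i]} ]]; then
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done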
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BTV 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ZAi ]] 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZAi 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Riu 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:17.117 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.118 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:15:17.118 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.118 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.118 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.118 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:17.118 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:15:17.118 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.052 nvme0n1 00:15:18.052 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.052 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.052 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.619 { 00:15:18.619 "auth": { 00:15:18.619 "dhgroup": "ffdhe8192", 00:15:18.619 "digest": "sha512", 00:15:18.619 "state": "completed" 00:15:18.619 }, 00:15:18.619 "cntlid": 1, 00:15:18.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:15:18.619 "listen_address": { 00:15:18.619 "adrfam": "IPv4", 00:15:18.619 "traddr": "10.0.0.3", 00:15:18.619 "trsvcid": "4420", 00:15:18.619 "trtype": "TCP" 00:15:18.619 }, 00:15:18.619 "peer_address": { 00:15:18.619 "adrfam": "IPv4", 00:15:18.619 "traddr": "10.0.0.1", 00:15:18.619 "trsvcid": "51032", 00:15:18.619 "trtype": "TCP" 00:15:18.619 }, 00:15:18.619 "qid": 0, 00:15:18.619 "state": "enabled", 00:15:18.619 "thread": "nvmf_tgt_poll_group_000" 00:15:18.619 } 00:15:18.619 ]' 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.619 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.878 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:15:18.878 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:15:19.813 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.813 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:19.813 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.813 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.813 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.813 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key3 00:15:19.813 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.813 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.813 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.813 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:19.813 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:20.071 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:20.071 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:20.071 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:20.071 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:20.071 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.071 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:20.071 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.071 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:20.071 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.071 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.329 2024/12/16 11:37:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:20.329 request: 00:15:20.329 { 00:15:20.329 "method": "bdev_nvme_attach_controller", 00:15:20.329 "params": { 00:15:20.329 "name": "nvme0", 00:15:20.329 "trtype": "tcp", 00:15:20.329 "traddr": "10.0.0.3", 00:15:20.329 "adrfam": "ipv4", 00:15:20.329 "trsvcid": "4420", 00:15:20.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:20.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:15:20.330 "prchk_reftag": false, 00:15:20.330 "prchk_guard": false, 00:15:20.330 "hdgst": false, 00:15:20.330 "ddgst": false, 00:15:20.330 "dhchap_key": "key3", 00:15:20.330 "allow_unrecognized_csi": false 00:15:20.330 } 00:15:20.330 } 00:15:20.330 Got JSON-RPC error response 00:15:20.330 GoRPCClient: error on JSON-RPC call 00:15:20.330 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:20.330 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.330 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.330 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.330 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:20.330 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:20.330 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:20.330 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:20.588 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:20.588 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:20.588 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:20.588 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:20.588 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.588 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:20.588 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.588 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:20.588 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.588 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.887 2024/12/16 11:37:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:20.887 request: 00:15:20.887 { 00:15:20.887 "method": "bdev_nvme_attach_controller", 00:15:20.887 "params": { 00:15:20.887 "name": "nvme0", 00:15:20.887 "trtype": "tcp", 00:15:20.887 "traddr": "10.0.0.3", 00:15:20.887 "adrfam": "ipv4", 00:15:20.887 "trsvcid": "4420", 00:15:20.887 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:20.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:15:20.887 "prchk_reftag": false, 00:15:20.887 "prchk_guard": false, 00:15:20.887 "hdgst": false, 00:15:20.887 "ddgst": false, 00:15:20.887 "dhchap_key": "key3", 00:15:20.887 "allow_unrecognized_csi": false 00:15:20.887 } 00:15:20.887 } 00:15:20.887 Got JSON-RPC error response 00:15:20.887 GoRPCClient: error on JSON-RPC call 00:15:20.887 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:20.887 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.887 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.887 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.887 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:20.887 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:20.887 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:20.887 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:20.887 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:20.887 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:21.164 11:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:21.731 2024/12/16 11:37:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:21.731 request: 00:15:21.731 { 00:15:21.731 "method": "bdev_nvme_attach_controller", 00:15:21.731 "params": { 00:15:21.731 "name": "nvme0", 00:15:21.731 "trtype": "tcp", 00:15:21.731 "traddr": "10.0.0.3", 00:15:21.731 "adrfam": "ipv4", 00:15:21.731 "trsvcid": "4420", 00:15:21.731 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:21.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:15:21.731 "prchk_reftag": false, 00:15:21.731 "prchk_guard": false, 00:15:21.731 "hdgst": false, 00:15:21.731 "ddgst": false, 00:15:21.731 "dhchap_key": "key0", 00:15:21.731 "dhchap_ctrlr_key": "key1", 00:15:21.731 "allow_unrecognized_csi": false 00:15:21.731 } 00:15:21.731 } 00:15:21.731 Got JSON-RPC error response 00:15:21.731 GoRPCClient: error on JSON-RPC call 00:15:21.731 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:21.731 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.731 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.731 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.731 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:21.731 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:21.731 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:21.990 nvme0n1 00:15:21.990 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:21.990 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.990 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:22.248 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.248 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.249 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.507 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 00:15:22.507 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.507 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:15:22.507 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.507 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:22.507 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:22.507 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:23.881 nvme0n1 00:15:23.881 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:23.881 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.881 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:23.881 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.881 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:23.881 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.881 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.881 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.881 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:23.881 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:23.881 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.140 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.140 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:15:24.140 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid 2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -l 0 --dhchap-secret DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: --dhchap-ctrl-secret DHHC-1:03:MWI3ZGI4MmVmNjZkMTUxMmFmODZjY2JkMzEwNTkyZmI0NGVhNTE0OTNlMTFmNTBjNzA5YWE5NTE1ZDljZDliM/NYdV4=: 00:15:25.078 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
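nvmf_subsystem_set_keys is the rotation primitive here: it swaps the DH-HMAC-CHAP keys on an existing host entry without removing it, and the trace above rotates from key1 to the key2/key3 pair before reconnecting with the kernel initiator. The rotation step in isolation, with placeholder identity and secrets:

    # Rotate the host entry to key2 (host) + key3 (controller), then
    # reconnect; handshakes started after this must use the new pair.
    rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
            --dhchap-key key2 --dhchap-ctrlr-key key3
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "DHHC-1:02:<key2-secret>" \
        --dhchap-ctrl-secret "DHHC-1:03:<key3-secret>"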
00:15:25.078 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:25.078 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:25.078 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:25.078 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:25.078 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:25.078 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:25.078 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.078 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.337 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:25.337 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:25.337 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:25.337 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:25.337 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.337 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:25.337 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.337 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:25.337 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:25.337 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:25.904 2024/12/16 11:37:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:25.904 request: 00:15:25.904 { 00:15:25.904 "method": "bdev_nvme_attach_controller", 00:15:25.904 "params": { 00:15:25.904 "name": "nvme0", 00:15:25.904 "trtype": "tcp", 00:15:25.904 "traddr": "10.0.0.3", 00:15:25.904 "adrfam": "ipv4", 
00:15:25.904 "trsvcid": "4420", 00:15:25.904 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:25.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8", 00:15:25.904 "prchk_reftag": false, 00:15:25.904 "prchk_guard": false, 00:15:25.904 "hdgst": false, 00:15:25.904 "ddgst": false, 00:15:25.904 "dhchap_key": "key1", 00:15:25.904 "allow_unrecognized_csi": false 00:15:25.904 } 00:15:25.904 } 00:15:25.904 Got JSON-RPC error response 00:15:25.904 GoRPCClient: error on JSON-RPC call 00:15:25.904 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:25.904 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:25.904 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:25.904 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:25.904 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:25.904 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:25.904 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:26.839 nvme0n1 00:15:26.839 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:26.839 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.839 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:27.404 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.404 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.404 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.662 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:27.662 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.662 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.662 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.662 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:27.662 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:27.662 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:27.919 nvme0n1 00:15:27.919 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:27.919 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.919 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:28.177 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.177 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.177 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: '' 2s 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: ]] 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzViMDgxNmYyYjg1NzY4ZjY1YWE3NTk5MjIwMmNlOWUtjHHm: 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:28.435 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: 2s 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: ]] 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDY0N2YwN2U5MDE1OGM3ZDRiZTE3ZjllMTA2YjcyOWJmNTdkMzQ3NTJlZWJhZGJjQ1cSZQ==: 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:30.963 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:32.886 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:33.820 nvme0n1 00:15:33.820 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.820 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.820 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.820 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.820 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.820 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:34.385 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:34.385 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.385 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r 
'.[].name' 00:15:34.643 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.643 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:34.643 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.643 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.643 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.643 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:34.643 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:35.209 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:35.209 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.209 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:35.467 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 
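What follows is the expected failure for case 261: the host attempts bdev_nvme_set_keys with key1 after the target has been restricted to key2/key3, and the NOT wrapper (the valid_exec_arg/es bookkeeping traced above) passes only if the RPC is rejected. Reduced to its effect, and reusing $rpc from the earlier sketch:

# Expected-failure check: after the target was re-keyed, asking the host
# controller to authenticate with key1 again must be refused.
if "$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key key3; then
    echo "bdev_nvme_set_keys unexpectedly succeeded" >&2
    exit 1
fi

The (( es > 128 )) test seen after the error treats exit codes above 128, typically signal deaths, as real failures rather than the expected rejection, so a crashed RPC client would still fail the case.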
00:15:36.031 2024/12/16 11:37:36 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:15:36.031 request: 00:15:36.031 { 00:15:36.031 "method": "bdev_nvme_set_keys", 00:15:36.032 "params": { 00:15:36.032 "name": "nvme0", 00:15:36.032 "dhchap_key": "key1", 00:15:36.032 "dhchap_ctrlr_key": "key3" 00:15:36.032 } 00:15:36.032 } 00:15:36.032 Got JSON-RPC error response 00:15:36.032 GoRPCClient: error on JSON-RPC call 00:15:36.032 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:36.032 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:36.032 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:36.032 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:36.032 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:36.032 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:36.032 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.289 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:36.289 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:37.662 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:37.662 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:37.662 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.662 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:37.662 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:37.662 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.662 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.662 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.662 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:37.662 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:37.662 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:38.598 nvme0n1 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:38.598 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:39.533 2024/12/16 11:37:39 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:15:39.533 request: 00:15:39.533 { 00:15:39.533 "method": "bdev_nvme_set_keys", 00:15:39.533 "params": { 00:15:39.533 "name": "nvme0", 00:15:39.533 "dhchap_key": "key2", 00:15:39.533 "dhchap_ctrlr_key": "key0" 00:15:39.533 } 00:15:39.533 } 00:15:39.533 Got JSON-RPC error response 00:15:39.533 GoRPCClient: error on JSON-RPC call 00:15:39.533 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:39.533 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:39.533 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:39.533 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:39.533 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:39.533 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:39.533 11:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.791 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:39.791 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:40.727 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:40.727 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.727 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:40.985 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:40.985 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:40.986 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:40.986 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 78401 00:15:40.986 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 78401 ']' 00:15:40.986 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 78401 00:15:40.986 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:40.986 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.986 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78401 00:15:40.986 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:40.986 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:40.986 killing process with pid 78401 00:15:40.986 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78401' 00:15:40.986 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 78401 00:15:40.986 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 78401 00:15:41.555 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:41.555 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:41.555 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:41.555 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:41.555 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:41.555 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:41.555 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:41.555 rmmod nvme_tcp 00:15:41.555 rmmod nvme_fabrics 00:15:41.555 rmmod nvme_keyring 00:15:41.555 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:41.555 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- 
# set -e 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 83317 ']' 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 83317 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 83317 ']' 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 83317 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83317 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.556 killing process with pid 83317 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83317' 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 83317 00:15:41.556 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 83317 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:41.814 11:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:41.814 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.072 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.072 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:42.072 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.072 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.LLV /tmp/spdk.key-sha256.Qet /tmp/spdk.key-sha384.BTV /tmp/spdk.key-sha512.Riu /tmp/spdk.key-sha512.CuG /tmp/spdk.key-sha384.ams /tmp/spdk.key-sha256.ZAi '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:42.073 00:15:42.073 real 3m16.428s 00:15:42.073 user 7m58.683s 00:15:42.073 sys 0m24.793s 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.073 ************************************ 00:15:42.073 END TEST nvmf_auth_target 00:15:42.073 ************************************ 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:42.073 ************************************ 00:15:42.073 START TEST nvmf_bdevio_no_huge 00:15:42.073 ************************************ 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:42.073 * Looking for test storage... 
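Before bdevio.sh proper starts, autotest_common.sh probes the installed lcov version; the lt/cmp_versions calls traced just below split dotted versions on '.', '-' and ':' and compare the fields numerically, left to right. A minimal sketch of the comparison made here, deciding whether lcov 1.15 predates 2:

# cmp_versions splits both versions into fields and walks them in order.
IFS='.-:' read -ra ver1 <<< "1.15"   # -> (1 15)
IFS='.-:' read -ra ver2 <<< "2"      # -> (2)
(( ver1[0] < ver2[0] ))              # 1 < 2, so "lt 1.15 2" is decided by the first field

Since 1 < 2, lt returns true and the pre-2.0 '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' option spelling is the one exported in LCOV_OPTS, as the trace shows.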
00:15:42.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:15:42.073 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:42.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.332 --rc genhtml_branch_coverage=1 00:15:42.332 --rc genhtml_function_coverage=1 00:15:42.332 --rc genhtml_legend=1 00:15:42.332 --rc geninfo_all_blocks=1 00:15:42.332 --rc geninfo_unexecuted_blocks=1 00:15:42.332 00:15:42.332 ' 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:42.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.332 --rc genhtml_branch_coverage=1 00:15:42.332 --rc genhtml_function_coverage=1 00:15:42.332 --rc genhtml_legend=1 00:15:42.332 --rc geninfo_all_blocks=1 00:15:42.332 --rc geninfo_unexecuted_blocks=1 00:15:42.332 00:15:42.332 ' 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:42.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.332 --rc genhtml_branch_coverage=1 00:15:42.332 --rc genhtml_function_coverage=1 00:15:42.332 --rc genhtml_legend=1 00:15:42.332 --rc geninfo_all_blocks=1 00:15:42.332 --rc geninfo_unexecuted_blocks=1 00:15:42.332 00:15:42.332 ' 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:42.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.332 --rc genhtml_branch_coverage=1 00:15:42.332 --rc genhtml_function_coverage=1 00:15:42.332 --rc genhtml_legend=1 00:15:42.332 --rc geninfo_all_blocks=1 00:15:42.332 --rc geninfo_unexecuted_blocks=1 00:15:42.332 00:15:42.332 ' 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:42.332 
11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.332 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.333 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:42.333 
11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:42.333 Cannot find device "nvmf_init_br" 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:42.333 Cannot find device "nvmf_init_br2" 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:42.333 Cannot find device "nvmf_tgt_br" 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.333 Cannot find device "nvmf_tgt_br2" 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:42.333 Cannot find device "nvmf_init_br" 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:42.333 Cannot find device "nvmf_init_br2" 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:42.333 Cannot find device "nvmf_tgt_br" 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:42.333 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:42.333 Cannot find device "nvmf_tgt_br2" 00:15:42.333 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:42.333 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:42.333 Cannot find device "nvmf_br" 00:15:42.333 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:42.333 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:42.333 Cannot find device "nvmf_init_if" 00:15:42.333 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:42.333 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:42.333 Cannot find device "nvmf_init_if2" 00:15:42.333 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:42.333 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:42.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.333 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:42.333 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.334 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:42.334 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:42.334 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:42.334 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:42.334 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:42.334 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:42.592 11:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:42.592 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:42.593 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:42.593 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:15:42.593 00:15:42.593 --- 10.0.0.3 ping statistics --- 00:15:42.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.593 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:42.593 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:42.593 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:15:42.593 00:15:42.593 --- 10.0.0.4 ping statistics --- 00:15:42.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.593 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:42.593 00:15:42.593 --- 10.0.0.1 ping statistics --- 00:15:42.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.593 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:42.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
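
The entries above assemble the test fixture's virtual network: a namespace for the target, veth pairs whose host-side peers are enslaved to a bridge, and iptables ACCEPT rules for the NVMe/TCP port. A condensed sketch of the same topology, with one pair per side shown where the log configures two of each; it assumes root on a Linux host with iproute2 and iptables:

    # Target end of each veth pair lives in its own network namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # The bridge joins the host-side peers, giving initiator <-> target L2 connectivity.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Each rule carries an SPDK_NVMF comment so teardown can strip exactly these
    # rules later with: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3   # connectivity checks like those running in the surrounding log
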
00:15:42.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:42.593 00:15:42.593 --- 10.0.0.2 ping statistics --- 00:15:42.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.593 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=84180 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 84180 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 84180 ']' 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.593 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:42.851 [2024-12-16 11:37:43.387370] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
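
nvmfappstart launches the target inside the namespace with --no-huge (DPDK falls back to plain malloc'd memory, 1024 MiB here) and then blocks until the JSON-RPC socket answers. A simplified stand-in for that launch-and-wait sequence; the poll loop only approximates SPDK's waitforlisten helper, and paths are taken from the log:

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # UNIX-domain sockets are not isolated by network namespaces, so the RPC
    # socket is reachable from the host side; poll until the app serves RPCs.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
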
00:15:42.851 [2024-12-16 11:37:43.387480] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:42.851 [2024-12-16 11:37:43.546726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.110 [2024-12-16 11:37:43.627438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.110 [2024-12-16 11:37:43.627518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.110 [2024-12-16 11:37:43.627532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.110 [2024-12-16 11:37:43.627543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.110 [2024-12-16 11:37:43.627552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.110 [2024-12-16 11:37:43.628185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:15:43.110 [2024-12-16 11:37:43.628304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:15:43.110 [2024-12-16 11:37:43.628454] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:15:43.110 [2024-12-16 11:37:43.628464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:43.110 [2024-12-16 11:37:43.839087] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:43.110 Malloc0 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.110 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:43.369 [2024-12-16 11:37:43.879332] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:43.369 { 00:15:43.369 "params": { 00:15:43.369 "name": "Nvme$subsystem", 00:15:43.369 "trtype": "$TEST_TRANSPORT", 00:15:43.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:43.369 "adrfam": "ipv4", 00:15:43.369 "trsvcid": "$NVMF_PORT", 00:15:43.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:43.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:43.369 "hdgst": ${hdgst:-false}, 00:15:43.369 "ddgst": ${ddgst:-false} 00:15:43.369 }, 00:15:43.369 "method": "bdev_nvme_attach_controller" 00:15:43.369 } 00:15:43.369 EOF 00:15:43.369 )") 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
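
Stripped of the xtrace noise, the rpc_cmd calls above provision the target that bdevio then exercises; the JSON printed next by gen_nvmf_target_json is handed to bdevio on /dev/fd/62 so it attaches to the subsystem as an initiator. The sequence, with sizes and addresses exactly as logged:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport; -u sets in-capsule data size, -o as logged
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
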
00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:43.369 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:43.369 "params": { 00:15:43.369 "name": "Nvme1", 00:15:43.369 "trtype": "tcp", 00:15:43.369 "traddr": "10.0.0.3", 00:15:43.369 "adrfam": "ipv4", 00:15:43.369 "trsvcid": "4420", 00:15:43.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:43.369 "hdgst": false, 00:15:43.369 "ddgst": false 00:15:43.369 }, 00:15:43.369 "method": "bdev_nvme_attach_controller" 00:15:43.369 }' 00:15:43.369 [2024-12-16 11:37:43.940813] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:15:43.369 [2024-12-16 11:37:43.941373] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid84215 ] 00:15:43.369 [2024-12-16 11:37:44.111991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:43.627 [2024-12-16 11:37:44.195057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.627 [2024-12-16 11:37:44.195207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.627 [2024-12-16 11:37:44.195214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.885 I/O targets: 00:15:43.885 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:43.885 00:15:43.885 00:15:43.885 CUnit - A unit testing framework for C - Version 2.1-3 00:15:43.885 http://cunit.sourceforge.net/ 00:15:43.885 00:15:43.885 00:15:43.885 Suite: bdevio tests on: Nvme1n1 00:15:43.885 Test: blockdev write read block ...passed 00:15:43.885 Test: blockdev write zeroes read block ...passed 00:15:43.885 Test: blockdev write zeroes read no split ...passed 00:15:43.885 Test: blockdev write zeroes read split ...passed 00:15:43.885 Test: blockdev write zeroes read split partial ...passed 00:15:43.885 Test: blockdev reset ...[2024-12-16 11:37:44.559567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:43.885 [2024-12-16 11:37:44.559661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a6720 (9): Bad file descriptor 00:15:43.885 [2024-12-16 11:37:44.571854] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:43.885 passed 00:15:43.885 Test: blockdev write read 8 blocks ...passed 00:15:43.885 Test: blockdev write read size > 128k ...passed 00:15:43.885 Test: blockdev write read invalid size ...passed 00:15:43.885 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:43.885 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:43.885 Test: blockdev write read max offset ...passed 00:15:44.143 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:44.143 Test: blockdev writev readv 8 blocks ...passed 00:15:44.143 Test: blockdev writev readv 30 x 1block ...passed 00:15:44.143 Test: blockdev writev readv block ...passed 00:15:44.143 Test: blockdev writev readv size > 128k ...passed 00:15:44.143 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:44.143 Test: blockdev comparev and writev ...[2024-12-16 11:37:44.749115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.143 [2024-12-16 11:37:44.749769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.143 [2024-12-16 11:37:44.750022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.143 [2024-12-16 11:37:44.750245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:44.143 [2024-12-16 11:37:44.750712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.143 [2024-12-16 11:37:44.750983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:44.143 [2024-12-16 11:37:44.751520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.143 [2024-12-16 11:37:44.751958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:44.143 [2024-12-16 11:37:44.752463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.143 [2024-12-16 11:37:44.752725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:44.143 [2024-12-16 11:37:44.753249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.143 [2024-12-16 11:37:44.753666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:44.143 [2024-12-16 11:37:44.754422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.143 [2024-12-16 11:37:44.754671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:44.143 [2024-12-16 11:37:44.755185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:44.143 [2024-12-16 11:37:44.755610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:44.143 passed 00:15:44.143 Test: blockdev nvme passthru rw ...passed 00:15:44.143 Test: blockdev nvme passthru vendor specific ...[2024-12-16 11:37:44.837519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:44.143 [2024-12-16 11:37:44.837876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:44.143 [2024-12-16 11:37:44.838128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:44.143 [2024-12-16 11:37:44.838459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:44.143 [2024-12-16 11:37:44.838683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:44.143 [2024-12-16 11:37:44.838756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:44.143 [2024-12-16 11:37:44.839197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:44.143 [2024-12-16 11:37:44.839304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:44.143 passed 00:15:44.143 Test: blockdev nvme admin passthru ...passed 00:15:44.402 Test: blockdev copy ...passed 00:15:44.402 00:15:44.402 Run Summary: Type Total Ran Passed Failed Inactive 00:15:44.402 suites 1 1 n/a 0 0 00:15:44.402 tests 23 23 23 0 0 00:15:44.402 asserts 152 152 152 0 n/a 00:15:44.402 00:15:44.402 Elapsed time = 0.923 seconds 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:44.670 rmmod nvme_tcp 00:15:44.670 rmmod nvme_fabrics 00:15:44.670 rmmod nvme_keyring 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 84180 ']' 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 84180 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 84180 ']' 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 84180 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.670 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84180 00:15:44.927 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:44.927 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:44.927 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84180' 00:15:44.927 killing process with pid 84180 00:15:44.927 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 84180 00:15:44.927 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 84180 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:45.186 11:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:45.186 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:45.444 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:45.444 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:45.444 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.444 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.444 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:45.444 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.444 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.444 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.444 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:45.444 00:15:45.444 real 0m3.377s 00:15:45.444 user 0m10.891s 00:15:45.444 sys 0m1.465s 00:15:45.444 ************************************ 00:15:45.444 END TEST nvmf_bdevio_no_huge 00:15:45.444 ************************************ 00:15:45.444 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.444 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:45.444 11:37:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:45.444 11:37:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:45.445 11:37:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.445 11:37:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:45.445 ************************************ 00:15:45.445 START TEST nvmf_tls 00:15:45.445 ************************************ 00:15:45.445 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:45.708 * Looking for test storage... 
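
Before tls.sh gets going, the bdevio fixture above was undone by nvmftestfini in reverse order: unload the kernel modules, kill the target, strip only the SPDK-tagged firewall rules, and dismantle the virtual network. Condensed; the final netns removal runs inside _remove_spdk_ns with its output redirected away, so that last step is inferred rather than shown:

    modprobe -v -r nvme-tcp                               # also drops nvme_fabrics / nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # remove tagged rules, keep everything else
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                      # inferred: hidden inside _remove_spdk_ns
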
00:15:45.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:45.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.708 --rc genhtml_branch_coverage=1 00:15:45.708 --rc genhtml_function_coverage=1 00:15:45.708 --rc genhtml_legend=1 00:15:45.708 --rc geninfo_all_blocks=1 00:15:45.708 --rc geninfo_unexecuted_blocks=1 00:15:45.708 00:15:45.708 ' 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:45.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.708 --rc genhtml_branch_coverage=1 00:15:45.708 --rc genhtml_function_coverage=1 00:15:45.708 --rc genhtml_legend=1 00:15:45.708 --rc geninfo_all_blocks=1 00:15:45.708 --rc geninfo_unexecuted_blocks=1 00:15:45.708 00:15:45.708 ' 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:45.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.708 --rc genhtml_branch_coverage=1 00:15:45.708 --rc genhtml_function_coverage=1 00:15:45.708 --rc genhtml_legend=1 00:15:45.708 --rc geninfo_all_blocks=1 00:15:45.708 --rc geninfo_unexecuted_blocks=1 00:15:45.708 00:15:45.708 ' 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:45.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.708 --rc genhtml_branch_coverage=1 00:15:45.708 --rc genhtml_function_coverage=1 00:15:45.708 --rc genhtml_legend=1 00:15:45.708 --rc geninfo_all_blocks=1 00:15:45.708 --rc geninfo_unexecuted_blocks=1 00:15:45.708 00:15:45.708 ' 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.708 11:37:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.708 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:45.709 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:45.709 
11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:45.709 Cannot find device "nvmf_init_br" 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:45.709 Cannot find device "nvmf_init_br2" 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:45.709 Cannot find device "nvmf_tgt_br" 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.709 Cannot find device "nvmf_tgt_br2" 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:45.709 Cannot find device "nvmf_init_br" 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:45.709 Cannot find device "nvmf_init_br2" 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:45.709 Cannot find device "nvmf_tgt_br" 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:45.709 Cannot find device "nvmf_tgt_br2" 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:45.709 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:45.968 Cannot find device "nvmf_br" 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:45.968 Cannot find device "nvmf_init_if" 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:45.968 Cannot find device "nvmf_init_if2" 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:45.968 11:37:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:45.968 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:46.227 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.227 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:46.227 00:15:46.227 --- 10.0.0.3 ping statistics --- 00:15:46.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.227 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:46.227 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:46.227 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 00:15:46.227 00:15:46.227 --- 10.0.0.4 ping statistics --- 00:15:46.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.227 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:46.227 00:15:46.227 --- 10.0.0.1 ping statistics --- 00:15:46.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.227 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:46.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:46.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:15:46.227 00:15:46.227 --- 10.0.0.2 ping statistics --- 00:15:46.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.227 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84464 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:46.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84464 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84464 ']' 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.227 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.227 [2024-12-16 11:37:46.837372] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
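A condensed view of the topology that the ip commands above assemble (interface names, the nvmf_tgt_ns_spdk namespace, and the 10.0.0.0/24 addressing are taken from the trace; the diagram itself is a reconstruction, not test output):

nvmf_init_if   (10.0.0.1/24, host)   --veth--  nvmf_init_br   --+
nvmf_init_if2  (10.0.0.2/24, host)   --veth--  nvmf_init_br2  --+-- nvmf_br (bridge, host)
nvmf_tgt_if    (10.0.0.3/24, netns)  --veth--  nvmf_tgt_br    --+
nvmf_tgt_if2   (10.0.0.4/24, netns)  --veth--  nvmf_tgt_br2   --+

The iptables rules admit tcp/4420 on the two init interfaces and allow forwarding across nvmf_br, and the four pings above confirm that host and namespace reach each other in both directions.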
00:15:46.227 [2024-12-16 11:37:46.837701] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.486 [2024-12-16 11:37:46.992703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.486 [2024-12-16 11:37:47.056540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.486 [2024-12-16 11:37:47.056601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.486 [2024-12-16 11:37:47.056616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.486 [2024-12-16 11:37:47.056626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.486 [2024-12-16 11:37:47.056635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.486 [2024-12-16 11:37:47.057091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.421 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.421 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:47.421 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:47.421 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:47.421 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.421 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.421 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:47.421 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:47.679 true 00:15:47.679 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:47.679 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:47.938 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:47.938 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:47.938 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:48.201 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:48.201 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:48.473 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:48.473 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:48.473 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:48.731 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:48.731 11:37:49 
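Every socket-option change in this stretch follows the same set-then-verify pattern: sock_impl_set_options writes a value, sock_impl_get_options piped through jq reads it back, and a shell comparison aborts the test on any mismatch. A standalone sketch of one cycle, with the rpc.py path as in the trace (the explicit error message is illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" sock_impl_set_options -i ssl --tls-version 13
version=$("$rpc" sock_impl_get_options -i ssl | jq -r .tls_version)
[[ "$version" == 13 ]] || { echo "tls_version readback mismatch: $version" >&2; exit 1; }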
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:48.990 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:48.990 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:48.990 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:48.990 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:49.249 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:49.249 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:49.249 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:49.507 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:49.507 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:50.074 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:50.074 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:50.074 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:50.333 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:50.333 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:50.592 11:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.xv8GE3ck7Q 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.lrZP6vEiZy 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.xv8GE3ck7Q 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.lrZP6vEiZy 00:15:50.592 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:50.850 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:51.418 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.xv8GE3ck7Q 00:15:51.418 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xv8GE3ck7Q 00:15:51.418 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:51.418 [2024-12-16 11:37:52.136436] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.418 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:51.677 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:51.936 [2024-12-16 11:37:52.652560] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:51.936 [2024-12-16 11:37:52.652805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:51.936 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:52.194 malloc0 00:15:52.194 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:52.453 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xv8GE3ck7Q 00:15:52.711 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # 
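The format_interchange_psk calls above turn raw key material into the NVMeTLSkey-1 interchange strings that get written to the mktemp files. A self-contained sketch of what the inline python computes, reconstructed from the trace: the key characters pass through as ASCII bytes, a 4-byte CRC-32 is appended (little-endian byte order is assumed here, following the common interchange-format convention), and the result is base64-encoded between the prefix and the hash-indicator field:

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - <<EOF
import base64, zlib
key = b"$key"                                          # key characters pass through as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # assumption: little-endian CRC-32
print("NVMeTLSkey-1:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()))
EOF
}
# format_interchange_psk 00112233445566778899aabbccddeeff 1 should reproduce the
# NVMeTLSkey-1:01:MDAx... key seen above if the byte-order assumption holds.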
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:52.970 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xv8GE3ck7Q 00:16:05.231 Initializing NVMe Controllers 00:16:05.231 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:05.231 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:05.231 Initialization complete. Launching workers. 00:16:05.231 ======================================================== 00:16:05.231 Latency(us) 00:16:05.231 Device Information : IOPS MiB/s Average min max 00:16:05.231 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9636.62 37.64 6642.84 1168.93 9740.54 00:16:05.231 ======================================================== 00:16:05.231 Total : 9636.62 37.64 6642.84 1168.93 9740.54 00:16:05.231 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xv8GE3ck7Q 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xv8GE3ck7Q 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84838 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84838 /var/tmp/bdevperf.sock 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84838 ']' 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:05.231 [2024-12-16 11:38:03.968354] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:16:05.231 [2024-12-16 11:38:03.968460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84838 ] 00:16:05.231 [2024-12-16 11:38:04.120706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.231 [2024-12-16 11:38:04.184032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.231 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.231 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:05.231 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xv8GE3ck7Q 00:16:05.231 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:05.231 [2024-12-16 11:38:05.484354] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:05.231 TLSTESTn1 00:16:05.231 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:05.231 Running I/O for 10 seconds... 00:16:07.103 3989.00 IOPS, 15.58 MiB/s [2024-12-16T11:38:08.792Z] 4088.50 IOPS, 15.97 MiB/s [2024-12-16T11:38:09.744Z] 4107.67 IOPS, 16.05 MiB/s [2024-12-16T11:38:11.145Z] 4117.00 IOPS, 16.08 MiB/s [2024-12-16T11:38:12.082Z] 4120.40 IOPS, 16.10 MiB/s [2024-12-16T11:38:13.017Z] 4130.67 IOPS, 16.14 MiB/s [2024-12-16T11:38:13.951Z] 4137.71 IOPS, 16.16 MiB/s [2024-12-16T11:38:14.884Z] 4153.38 IOPS, 16.22 MiB/s [2024-12-16T11:38:15.825Z] 4169.67 IOPS, 16.29 MiB/s [2024-12-16T11:38:15.825Z] 4179.20 IOPS, 16.32 MiB/s 00:16:15.073 Latency(us) 00:16:15.073 [2024-12-16T11:38:15.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.073 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:15.073 Verification LBA range: start 0x0 length 0x2000 00:16:15.073 TLSTESTn1 : 10.02 4184.44 16.35 0.00 0.00 30532.94 5213.09 23235.49 00:16:15.073 [2024-12-16T11:38:15.826Z] =================================================================================================================== 00:16:15.073 [2024-12-16T11:38:15.826Z] Total : 4184.44 16.35 0.00 0.00 30532.94 5213.09 23235.49 00:16:15.073 { 00:16:15.073 "results": [ 00:16:15.073 { 00:16:15.073 "job": "TLSTESTn1", 00:16:15.073 "core_mask": "0x4", 00:16:15.073 "workload": "verify", 00:16:15.073 "status": "finished", 00:16:15.073 "verify_range": { 00:16:15.073 "start": 0, 00:16:15.073 "length": 8192 00:16:15.073 }, 00:16:15.073 "queue_depth": 128, 00:16:15.073 "io_size": 4096, 00:16:15.073 "runtime": 10.018058, 00:16:15.073 "iops": 4184.44373150964, 00:16:15.073 "mibps": 16.345483326209532, 00:16:15.073 "io_failed": 0, 00:16:15.073 "io_timeout": 0, 00:16:15.073 "avg_latency_us": 30532.944477446214, 00:16:15.073 "min_latency_us": 5213.090909090909, 00:16:15.073 "max_latency_us": 23235.49090909091 00:16:15.073 } 00:16:15.073 ], 00:16:15.073 "core_count": 1 00:16:15.073 } 00:16:15.073 11:38:15 
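The throughput run that just finished is driven by the run_bdevperf helper (target/tls.sh@22-35 in the trace): bdevperf is started against its own RPC socket, the PSK file is registered as keyring key key0, a TLS-enabled controller is attached, and bdevperf.py executes the verify workload. A condensed sketch of that sequence, with binaries, socket path, and arguments exactly as traced (the waitforlisten step is summarized as a comment):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
# wait until the process listens on $sock (waitforlisten in the trace), then:
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.xv8GE3ck7Q
"$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests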
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:15.073 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84838 00:16:15.073 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84838 ']' 00:16:15.073 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84838 00:16:15.073 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:15.073 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.073 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84838 00:16:15.073 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:15.073 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:15.073 killing process with pid 84838 00:16:15.073 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84838' 00:16:15.073 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84838 00:16:15.073 Received shutdown signal, test time was about 10.000000 seconds 00:16:15.073 00:16:15.073 Latency(us) 00:16:15.073 [2024-12-16T11:38:15.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.073 [2024-12-16T11:38:15.826Z] =================================================================================================================== 00:16:15.073 [2024-12-16T11:38:15.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:15.073 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84838 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lrZP6vEiZy 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lrZP6vEiZy 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lrZP6vEiZy 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lrZP6vEiZy 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85001 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85001 /var/tmp/bdevperf.sock 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85001 ']' 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.331 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.331 [2024-12-16 11:38:16.036374] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:16:15.331 [2024-12-16 11:38:16.036464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85001 ] 00:16:15.589 [2024-12-16 11:38:16.177370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.589 [2024-12-16 11:38:16.235704] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.847 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.847 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:15.847 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lrZP6vEiZy 00:16:16.105 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:16.364 [2024-12-16 11:38:16.921795] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:16.364 [2024-12-16 11:38:16.927936] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:16.364 [2024-12-16 11:38:16.928853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f6a0 (107): Transport endpoint is not connected 00:16:16.364 [2024-12-16 11:38:16.929840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f6a0 (9): Bad file descriptor 00:16:16.364 [2024-12-16 
11:38:16.930837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:16.364 [2024-12-16 11:38:16.930865] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:16.364 [2024-12-16 11:38:16.930892] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:16.364 [2024-12-16 11:38:16.930925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:16:16.364 2024/12/16 11:38:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:16.364 request: 00:16:16.364 { 00:16:16.364 "method": "bdev_nvme_attach_controller", 00:16:16.364 "params": { 00:16:16.364 "name": "TLSTEST", 00:16:16.364 "trtype": "tcp", 00:16:16.364 "traddr": "10.0.0.3", 00:16:16.364 "adrfam": "ipv4", 00:16:16.364 "trsvcid": "4420", 00:16:16.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:16.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:16.364 "prchk_reftag": false, 00:16:16.364 "prchk_guard": false, 00:16:16.364 "hdgst": false, 00:16:16.364 "ddgst": false, 00:16:16.364 "psk": "key0", 00:16:16.364 "allow_unrecognized_csi": false 00:16:16.364 } 00:16:16.364 } 00:16:16.364 Got JSON-RPC error response 00:16:16.364 GoRPCClient: error on JSON-RPC call 00:16:16.364 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85001 00:16:16.364 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85001 ']' 00:16:16.364 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85001 00:16:16.364 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:16.364 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.364 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85001 00:16:16.364 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:16.364 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:16.364 killing process with pid 85001 00:16:16.364 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85001' 00:16:16.364 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85001 00:16:16.364 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85001 00:16:16.364 Received shutdown signal, test time was about 10.000000 seconds 00:16:16.364 00:16:16.364 Latency(us) 00:16:16.364 [2024-12-16T11:38:17.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.364 [2024-12-16T11:38:17.117Z] =================================================================================================================== 00:16:16.364 [2024-12-16T11:38:17.117Z] Total : 0.00 0.00 0.00 
0.00 0.00 18446744073709551616.00 0.00 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xv8GE3ck7Q 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xv8GE3ck7Q 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xv8GE3ck7Q 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xv8GE3ck7Q 00:16:16.622 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:16.623 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:16.623 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85043 00:16:16.623 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:16.623 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85043 /var/tmp/bdevperf.sock 00:16:16.623 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85043 ']' 00:16:16.623 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:16.623 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:16.623 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:16.623 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.623 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:16.623 [2024-12-16 11:38:17.217340] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:16:16.623 [2024-12-16 11:38:17.217438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85043 ] 00:16:16.623 [2024-12-16 11:38:17.357091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.881 [2024-12-16 11:38:17.407296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.881 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.881 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:16.881 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xv8GE3ck7Q 00:16:17.139 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:17.396 [2024-12-16 11:38:18.117904] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:17.396 [2024-12-16 11:38:18.127971] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:17.396 [2024-12-16 11:38:18.128030] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:17.396 [2024-12-16 11:38:18.128095] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:17.396 [2024-12-16 11:38:18.128933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0d6a0 (107): Transport endpoint is not connected 00:16:17.397 [2024-12-16 11:38:18.129921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0d6a0 (9): Bad file descriptor 00:16:17.397 [2024-12-16 11:38:18.130918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:17.397 [2024-12-16 11:38:18.130947] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:17.397 [2024-12-16 11:38:18.130975] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:17.397 [2024-12-16 11:38:18.130991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:17.397 2024/12/16 11:38:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:17.397 request: 00:16:17.397 { 00:16:17.397 "method": "bdev_nvme_attach_controller", 00:16:17.397 "params": { 00:16:17.397 "name": "TLSTEST", 00:16:17.397 "trtype": "tcp", 00:16:17.397 "traddr": "10.0.0.3", 00:16:17.397 "adrfam": "ipv4", 00:16:17.397 "trsvcid": "4420", 00:16:17.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.397 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:17.397 "prchk_reftag": false, 00:16:17.397 "prchk_guard": false, 00:16:17.397 "hdgst": false, 00:16:17.397 "ddgst": false, 00:16:17.397 "psk": "key0", 00:16:17.397 "allow_unrecognized_csi": false 00:16:17.397 } 00:16:17.397 } 00:16:17.397 Got JSON-RPC error response 00:16:17.397 GoRPCClient: error on JSON-RPC call 00:16:17.655 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85043 00:16:17.655 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85043 ']' 00:16:17.655 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85043 00:16:17.655 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:17.655 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.655 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85043 00:16:17.655 killing process with pid 85043 00:16:17.655 Received shutdown signal, test time was about 10.000000 seconds 00:16:17.655 00:16:17.656 Latency(us) 00:16:17.656 [2024-12-16T11:38:18.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.656 [2024-12-16T11:38:18.409Z] =================================================================================================================== 00:16:17.656 [2024-12-16T11:38:18.409Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85043' 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85043 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85043 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:17.656 11:38:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xv8GE3ck7Q 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xv8GE3ck7Q 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xv8GE3ck7Q 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xv8GE3ck7Q 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85082 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85082 /var/tmp/bdevperf.sock 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85082 ']' 00:16:17.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.656 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:17.914 [2024-12-16 11:38:18.448798] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:16:17.914 [2024-12-16 11:38:18.448918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85082 ] 00:16:17.914 [2024-12-16 11:38:18.597879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.914 [2024-12-16 11:38:18.651225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.849 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.849 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:18.849 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xv8GE3ck7Q 00:16:19.107 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:19.365 [2024-12-16 11:38:19.980355] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:19.365 [2024-12-16 11:38:19.988272] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:19.365 [2024-12-16 11:38:19.988315] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:19.365 [2024-12-16 11:38:19.988364] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:19.365 [2024-12-16 11:38:19.989242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c836a0 (107): Transport endpoint is not connected 00:16:19.365 [2024-12-16 11:38:19.990226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c836a0 (9): Bad file descriptor 00:16:19.365 [2024-12-16 11:38:19.991222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:19.365 [2024-12-16 11:38:19.991251] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:19.365 [2024-12-16 11:38:19.991262] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:19.365 [2024-12-16 11:38:19.991280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:19.365 2024/12/16 11:38:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:19.365 request: 00:16:19.365 { 00:16:19.365 "method": "bdev_nvme_attach_controller", 00:16:19.365 "params": { 00:16:19.365 "name": "TLSTEST", 00:16:19.365 "trtype": "tcp", 00:16:19.365 "traddr": "10.0.0.3", 00:16:19.365 "adrfam": "ipv4", 00:16:19.365 "trsvcid": "4420", 00:16:19.365 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:19.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.365 "prchk_reftag": false, 00:16:19.365 "prchk_guard": false, 00:16:19.365 "hdgst": false, 00:16:19.365 "ddgst": false, 00:16:19.365 "psk": "key0", 00:16:19.365 "allow_unrecognized_csi": false 00:16:19.365 } 00:16:19.365 } 00:16:19.365 Got JSON-RPC error response 00:16:19.365 GoRPCClient: error on JSON-RPC call 00:16:19.365 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85082 00:16:19.365 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85082 ']' 00:16:19.365 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85082 00:16:19.365 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:19.365 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.365 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85082 00:16:19.365 killing process with pid 85082 00:16:19.366 Received shutdown signal, test time was about 10.000000 seconds 00:16:19.366 00:16:19.366 Latency(us) 00:16:19.366 [2024-12-16T11:38:20.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.366 [2024-12-16T11:38:20.119Z] =================================================================================================================== 00:16:19.366 [2024-12-16T11:38:20.119Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:19.366 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:19.366 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:19.366 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85082' 00:16:19.366 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85082 00:16:19.366 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85082 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:19.624 11:38:20 
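The three NOT run_bdevperf cases traced above invert the assertion: with a mismatched PSK (tmp.lrZP6vEiZy), a wrong hostnqn (host2), or a wrong subnqn (cnode2), the target cannot find a matching PSK identity, the TLS handshake collapses, bdev_nvme_attach_controller returns an error, and run_bdevperf has to exit non-zero for NOT to pass (a fourth case just below exercises an empty PSK path, which keyring_file_add_key itself rejects). A minimal sketch of one expected-failure probe, reusing the traced RPC calls (the if/exit wrapper is illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.lrZP6vEiZy   # target side was configured with tmp.xv8GE3ck7Q
if "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "attach unexpectedly succeeded with the wrong PSK" >&2
    exit 1
fi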
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85140 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85140 /var/tmp/bdevperf.sock 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85140 ']' 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:19.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.624 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:19.624 [2024-12-16 11:38:20.301236] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:16:19.624 [2024-12-16 11:38:20.301467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85140 ] 00:16:19.883 [2024-12-16 11:38:20.446195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.883 [2024-12-16 11:38:20.505405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.883 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.883 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:19.883 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:20.140 [2024-12-16 11:38:20.871552] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:20.140 [2024-12-16 11:38:20.871597] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:20.140 2024/12/16 11:38:20 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:20.140 request: 00:16:20.140 { 00:16:20.140 "method": "keyring_file_add_key", 00:16:20.140 "params": { 00:16:20.140 "name": "key0", 00:16:20.140 "path": "" 00:16:20.140 } 00:16:20.140 } 00:16:20.140 Got JSON-RPC error response 00:16:20.140 GoRPCClient: error on JSON-RPC call 00:16:20.399 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:20.399 [2024-12-16 11:38:21.127738] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:20.399 [2024-12-16 11:38:21.127810] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:20.399 2024/12/16 11:38:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:16:20.399 request: 00:16:20.399 { 00:16:20.399 "method": "bdev_nvme_attach_controller", 00:16:20.399 "params": { 00:16:20.399 "name": "TLSTEST", 00:16:20.399 "trtype": "tcp", 00:16:20.399 "traddr": "10.0.0.3", 00:16:20.399 "adrfam": "ipv4", 00:16:20.399 "trsvcid": "4420", 00:16:20.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:20.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:20.399 "prchk_reftag": false, 00:16:20.399 "prchk_guard": false, 00:16:20.399 "hdgst": false, 00:16:20.399 "ddgst": false, 00:16:20.399 "psk": "key0", 00:16:20.399 "allow_unrecognized_csi": false 00:16:20.399 } 00:16:20.399 } 00:16:20.399 Got JSON-RPC error response 00:16:20.399 GoRPCClient: error on JSON-RPC call 00:16:20.399 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85140 00:16:20.399 11:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85140 ']' 00:16:20.399 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85140 00:16:20.399 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85140 00:16:20.663 killing process with pid 85140 00:16:20.663 Received shutdown signal, test time was about 10.000000 seconds 00:16:20.663 00:16:20.663 Latency(us) 00:16:20.663 [2024-12-16T11:38:21.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.663 [2024-12-16T11:38:21.416Z] =================================================================================================================== 00:16:20.663 [2024-12-16T11:38:21.416Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85140' 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85140 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85140 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 84464 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84464 ']' 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84464 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.663 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84464 00:16:20.924 killing process with pid 84464 00:16:20.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:20.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:20.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84464' 00:16:20.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84464 00:16:20.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84464 00:16:20.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:20.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:20.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:20.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:20.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:20.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.6UkUdY8wJO 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.6UkUdY8wJO 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85189 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85189 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85189 ']' 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.183 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:21.183 [2024-12-16 11:38:21.767652] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:16:21.183 [2024-12-16 11:38:21.767754] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.183 [2024-12-16 11:38:21.918215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.442 [2024-12-16 11:38:21.971667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.442 [2024-12-16 11:38:21.971726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.442 [2024-12-16 11:38:21.971738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.442 [2024-12-16 11:38:21.971746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.442 [2024-12-16 11:38:21.971754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.442 [2024-12-16 11:38:21.972165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.442 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.442 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:21.442 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:21.442 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:21.442 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:21.442 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.442 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.6UkUdY8wJO 00:16:21.442 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6UkUdY8wJO 00:16:21.442 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:21.701 [2024-12-16 11:38:22.367347] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.701 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:21.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:22.219 [2024-12-16 11:38:22.915488] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:22.219 [2024-12-16 11:38:22.915740] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:22.219 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:22.478 malloc0 00:16:22.478 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:23.046 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO 00:16:23.046 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6UkUdY8wJO 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6UkUdY8wJO 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85291 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85291 /var/tmp/bdevperf.sock 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85291 ']' 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.305 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.564 [2024-12-16 11:38:24.059533] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:16:23.564 [2024-12-16 11:38:24.059615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85291 ] 00:16:23.564 [2024-12-16 11:38:24.208500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.564 [2024-12-16 11:38:24.266680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.822 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.823 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:23.823 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO 00:16:24.081 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:24.340 [2024-12-16 11:38:24.923321] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:24.340 TLSTESTn1 00:16:24.340 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:24.598 Running I/O for 10 seconds... 00:16:26.469 4227.00 IOPS, 16.51 MiB/s [2024-12-16T11:38:28.159Z] 4209.50 IOPS, 16.44 MiB/s [2024-12-16T11:38:29.535Z] 4213.67 IOPS, 16.46 MiB/s [2024-12-16T11:38:30.471Z] 4214.75 IOPS, 16.46 MiB/s [2024-12-16T11:38:31.406Z] 4199.20 IOPS, 16.40 MiB/s [2024-12-16T11:38:32.342Z] 4214.33 IOPS, 16.46 MiB/s [2024-12-16T11:38:33.294Z] 4226.43 IOPS, 16.51 MiB/s [2024-12-16T11:38:34.244Z] 4230.00 IOPS, 16.52 MiB/s [2024-12-16T11:38:35.180Z] 4229.44 IOPS, 16.52 MiB/s [2024-12-16T11:38:35.180Z] 4234.30 IOPS, 16.54 MiB/s 00:16:34.427 Latency(us) 00:16:34.427 [2024-12-16T11:38:35.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.427 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:34.427 Verification LBA range: start 0x0 length 0x2000 00:16:34.427 TLSTESTn1 : 10.02 4239.90 16.56 0.00 0.00 30132.68 5481.19 25499.46 00:16:34.427 [2024-12-16T11:38:35.180Z] =================================================================================================================== 00:16:34.427 [2024-12-16T11:38:35.180Z] Total : 4239.90 16.56 0.00 0.00 30132.68 5481.19 25499.46 00:16:34.427 { 00:16:34.427 "results": [ 00:16:34.427 { 00:16:34.427 "job": "TLSTESTn1", 00:16:34.427 "core_mask": "0x4", 00:16:34.427 "workload": "verify", 00:16:34.427 "status": "finished", 00:16:34.427 "verify_range": { 00:16:34.427 "start": 0, 00:16:34.427 "length": 8192 00:16:34.427 }, 00:16:34.427 "queue_depth": 128, 00:16:34.427 "io_size": 4096, 00:16:34.427 "runtime": 10.016044, 00:16:34.427 "iops": 4239.897508437463, 00:16:34.427 "mibps": 16.56209964233384, 00:16:34.427 "io_failed": 0, 00:16:34.427 "io_timeout": 0, 00:16:34.427 "avg_latency_us": 30132.682928733968, 00:16:34.427 "min_latency_us": 5481.192727272727, 00:16:34.427 "max_latency_us": 25499.46181818182 00:16:34.427 } 00:16:34.427 ], 00:16:34.427 "core_count": 1 00:16:34.427 } 00:16:34.686 11:38:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 85291 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85291 ']' 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85291 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85291 00:16:34.686 killing process with pid 85291 00:16:34.686 Received shutdown signal, test time was about 10.000000 seconds 00:16:34.686 00:16:34.686 Latency(us) 00:16:34.686 [2024-12-16T11:38:35.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.686 [2024-12-16T11:38:35.439Z] =================================================================================================================== 00:16:34.686 [2024-12-16T11:38:35.439Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85291' 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85291 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85291 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.6UkUdY8wJO 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6UkUdY8wJO 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6UkUdY8wJO 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6UkUdY8wJO 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.6UkUdY8wJO 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85437 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85437 /var/tmp/bdevperf.sock 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85437 ']' 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:34.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.686 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.945 [2024-12-16 11:38:35.470314] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:16:34.945 [2024-12-16 11:38:35.470565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85437 ] 00:16:34.945 [2024-12-16 11:38:35.619203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.945 [2024-12-16 11:38:35.666992] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.203 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.203 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:35.203 11:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO 00:16:35.462 [2024-12-16 11:38:36.021223] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6UkUdY8wJO': 0100666 00:16:35.462 [2024-12-16 11:38:36.021275] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:35.462 2024/12/16 11:38:36 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.6UkUdY8wJO], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:35.462 request: 00:16:35.462 { 00:16:35.462 "method": "keyring_file_add_key", 00:16:35.462 "params": { 00:16:35.462 "name": "key0", 00:16:35.462 "path": "/tmp/tmp.6UkUdY8wJO" 00:16:35.462 } 00:16:35.462 } 00:16:35.462 Got JSON-RPC error response 00:16:35.462 GoRPCClient: error on JSON-RPC call 00:16:35.462 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:35.721 [2024-12-16 11:38:36.269364] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:35.721 [2024-12-16 11:38:36.269424] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:35.721 2024/12/16 11:38:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:16:35.721 request: 00:16:35.721 { 00:16:35.721 "method": "bdev_nvme_attach_controller", 00:16:35.721 "params": { 00:16:35.721 "name": "TLSTEST", 00:16:35.721 "trtype": "tcp", 00:16:35.721 "traddr": "10.0.0.3", 00:16:35.721 "adrfam": "ipv4", 00:16:35.721 "trsvcid": "4420", 00:16:35.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:35.721 "prchk_reftag": false, 00:16:35.721 "prchk_guard": false, 00:16:35.721 "hdgst": false, 00:16:35.721 "ddgst": false, 00:16:35.721 "psk": "key0", 00:16:35.721 "allow_unrecognized_csi": false 00:16:35.721 } 00:16:35.721 } 00:16:35.721 Got JSON-RPC error response 00:16:35.721 GoRPCClient: error on JSON-RPC call 00:16:35.721 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85437 00:16:35.721 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85437 ']' 00:16:35.721 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85437 00:16:35.721 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:35.721 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.721 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85437 00:16:35.721 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:35.721 killing process with pid 85437 00:16:35.721 Received shutdown signal, test time was about 10.000000 seconds 00:16:35.721 00:16:35.721 Latency(us) 00:16:35.721 [2024-12-16T11:38:36.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.721 [2024-12-16T11:38:36.474Z] =================================================================================================================== 00:16:35.721 [2024-12-16T11:38:36.474Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:35.721 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:35.721 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85437' 00:16:35.721 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85437 00:16:35.721 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85437 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
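The two failures above are the keyring's preconditions for loading a TLS PSK: keyring_file_add_key rejects a non-absolute path (the empty path passed at target/tls.sh@156) and rejects a key file whose mode grants group or other access (the chmod 0666 at target/tls.sh@171 turns the previously accepted 0600 file into mode 0100666), and in both cases the follow-up bdev_nvme_attach_controller --psk key0 then fails with "Required key not available". A minimal out-of-band reproduction sketch, assuming the SPDK rpc.py is on PATH and a bdevperf instance is listening on /var/tmp/bdevperf.sock; the file name /tmp/demo.psk is hypothetical, while the key string is the interchange-format key produced at target/tls.sh@160 (per the NVMe/TCP PSK interchange format, NVMeTLSkey-1:02:<base64 of the key material followed by a CRC32 trailer>:, with the 02 field selecting the SHA-384 retained-key digest):

  # Sketch only, not part of this test run; rpc.py location, socket path,
  # and /tmp/demo.psk are assumptions.
  echo -n 'NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:' > /tmp/demo.psk
  chmod 0666 /tmp/demo.psk    # group/other bits set: the add is expected to fail
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/demo.psk \
      || echo 'rejected: Invalid permissions'
  chmod 0600 /tmp/demo.psk    # owner-only access: the same call is expected to succeed
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/demo.psk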
00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 85189 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85189 ']' 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85189 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85189 00:16:35.980 killing process with pid 85189 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85189' 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85189 00:16:35.980 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85189 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85481 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85481 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85481 ']' 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.239 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.239 [2024-12-16 11:38:36.843707] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:16:36.239 [2024-12-16 11:38:36.843826] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.497 [2024-12-16 11:38:36.994306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.497 [2024-12-16 11:38:37.050680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.497 [2024-12-16 11:38:37.050748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.497 [2024-12-16 11:38:37.050760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.497 [2024-12-16 11:38:37.050769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.497 [2024-12-16 11:38:37.050776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.497 [2024-12-16 11:38:37.051180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.497 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.497 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:36.497 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.6UkUdY8wJO 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.6UkUdY8wJO 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.6UkUdY8wJO 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6UkUdY8wJO 00:16:36.498 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:37.065 [2024-12-16 11:38:37.524068] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.065 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:37.065 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:37.323 [2024-12-16 11:38:38.072225] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:37.323 [2024-12-16 11:38:38.072478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:37.582 11:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:37.840 malloc0 00:16:37.840 11:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:38.098 11:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO 00:16:38.356 [2024-12-16 11:38:38.944269] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6UkUdY8wJO': 0100666 00:16:38.356 [2024-12-16 11:38:38.944324] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:38.356 2024/12/16 11:38:38 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.6UkUdY8wJO], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:38.356 request: 00:16:38.356 { 00:16:38.356 "method": "keyring_file_add_key", 00:16:38.356 "params": { 00:16:38.356 "name": "key0", 00:16:38.356 "path": "/tmp/tmp.6UkUdY8wJO" 00:16:38.356 } 00:16:38.356 } 00:16:38.356 Got JSON-RPC error response 00:16:38.356 GoRPCClient: error on JSON-RPC call 00:16:38.356 11:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:38.615 [2024-12-16 11:38:39.220381] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:38.615 [2024-12-16 11:38:39.220451] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:38.615 2024/12/16 11:38:39 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:38.615 request: 00:16:38.615 { 00:16:38.615 "method": "nvmf_subsystem_add_host", 00:16:38.615 "params": { 00:16:38.615 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.615 "host": "nqn.2016-06.io.spdk:host1", 00:16:38.615 "psk": "key0" 00:16:38.615 } 00:16:38.615 } 00:16:38.615 Got JSON-RPC error response 00:16:38.615 GoRPCClient: error on JSON-RPC call 00:16:38.615 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:38.615 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 85481 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85481 ']' 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 85481 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85481 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:38.616 killing process with pid 85481 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85481' 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85481 00:16:38.616 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85481 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.6UkUdY8wJO 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85590 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85590 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85590 ']' 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.874 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.875 11:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:38.875 [2024-12-16 11:38:39.562389] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:16:38.875 [2024-12-16 11:38:39.562508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.133 [2024-12-16 11:38:39.704734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.133 [2024-12-16 11:38:39.765161] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
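Before the bdevperf run that follows, setup_nvmf_tgt at target/tls.sh@186 builds the TLS-enabled target. Condensed to its RPC sequence as exercised below (rpc.py shown without the /home/vagrant/spdk_repo/spdk/scripts prefix used in this run; all arguments, NQNs, and the key path are taken from the log), where -k on nvmf_subsystem_add_listener is what requests TLS on the listener and nvmf_subsystem_add_host --psk key0 binds host1 to the keyring entry:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0   # host1 must present key0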
00:16:39.133 [2024-12-16 11:38:39.765214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.133 [2024-12-16 11:38:39.765225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.133 [2024-12-16 11:38:39.765234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.133 [2024-12-16 11:38:39.765242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.133 [2024-12-16 11:38:39.765640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.067 11:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.067 11:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:40.067 11:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.067 11:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.067 11:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.067 11:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.067 11:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.6UkUdY8wJO 00:16:40.067 11:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6UkUdY8wJO 00:16:40.067 11:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:40.325 [2024-12-16 11:38:40.892394] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.325 11:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:40.583 11:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:40.841 [2024-12-16 11:38:41.404579] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:40.842 [2024-12-16 11:38:41.404872] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:40.842 11:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:41.100 malloc0 00:16:41.100 11:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:41.359 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO 00:16:41.617 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:41.876 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=85705 00:16:41.876 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:41.876 11:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:41.876 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 85705 /var/tmp/bdevperf.sock 00:16:41.876 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85705 ']' 00:16:41.876 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.876 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:41.876 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.876 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.876 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:41.876 [2024-12-16 11:38:42.589419] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:16:41.876 [2024-12-16 11:38:42.589525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85705 ] 00:16:42.135 [2024-12-16 11:38:42.738991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.135 [2024-12-16 11:38:42.800799] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.106 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.106 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:43.106 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO 00:16:43.106 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:43.387 [2024-12-16 11:38:44.027362] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:43.387 TLSTESTn1 00:16:43.387 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:43.953 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:43.953 "subsystems": [ 00:16:43.953 { 00:16:43.953 "subsystem": "keyring", 00:16:43.953 "config": [ 00:16:43.953 { 00:16:43.953 "method": "keyring_file_add_key", 00:16:43.953 "params": { 00:16:43.953 "name": "key0", 00:16:43.953 "path": "/tmp/tmp.6UkUdY8wJO" 00:16:43.953 } 00:16:43.953 } 00:16:43.953 ] 00:16:43.953 }, 00:16:43.953 { 00:16:43.953 "subsystem": "iobuf", 00:16:43.953 "config": [ 00:16:43.953 { 00:16:43.953 "method": "iobuf_set_options", 00:16:43.953 "params": { 00:16:43.953 "enable_numa": false, 00:16:43.953 "large_bufsize": 135168, 00:16:43.953 "large_pool_count": 1024, 00:16:43.953 
"small_bufsize": 8192, 00:16:43.953 "small_pool_count": 8192 00:16:43.953 } 00:16:43.953 } 00:16:43.953 ] 00:16:43.953 }, 00:16:43.953 { 00:16:43.953 "subsystem": "sock", 00:16:43.953 "config": [ 00:16:43.953 { 00:16:43.953 "method": "sock_set_default_impl", 00:16:43.953 "params": { 00:16:43.953 "impl_name": "posix" 00:16:43.953 } 00:16:43.953 }, 00:16:43.953 { 00:16:43.953 "method": "sock_impl_set_options", 00:16:43.953 "params": { 00:16:43.953 "enable_ktls": false, 00:16:43.953 "enable_placement_id": 0, 00:16:43.953 "enable_quickack": false, 00:16:43.953 "enable_recv_pipe": true, 00:16:43.953 "enable_zerocopy_send_client": false, 00:16:43.954 "enable_zerocopy_send_server": true, 00:16:43.954 "impl_name": "ssl", 00:16:43.954 "recv_buf_size": 4096, 00:16:43.954 "send_buf_size": 4096, 00:16:43.954 "tls_version": 0, 00:16:43.954 "zerocopy_threshold": 0 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "method": "sock_impl_set_options", 00:16:43.954 "params": { 00:16:43.954 "enable_ktls": false, 00:16:43.954 "enable_placement_id": 0, 00:16:43.954 "enable_quickack": false, 00:16:43.954 "enable_recv_pipe": true, 00:16:43.954 "enable_zerocopy_send_client": false, 00:16:43.954 "enable_zerocopy_send_server": true, 00:16:43.954 "impl_name": "posix", 00:16:43.954 "recv_buf_size": 2097152, 00:16:43.954 "send_buf_size": 2097152, 00:16:43.954 "tls_version": 0, 00:16:43.954 "zerocopy_threshold": 0 00:16:43.954 } 00:16:43.954 } 00:16:43.954 ] 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "subsystem": "vmd", 00:16:43.954 "config": [] 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "subsystem": "accel", 00:16:43.954 "config": [ 00:16:43.954 { 00:16:43.954 "method": "accel_set_options", 00:16:43.954 "params": { 00:16:43.954 "buf_count": 2048, 00:16:43.954 "large_cache_size": 16, 00:16:43.954 "sequence_count": 2048, 00:16:43.954 "small_cache_size": 128, 00:16:43.954 "task_count": 2048 00:16:43.954 } 00:16:43.954 } 00:16:43.954 ] 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "subsystem": "bdev", 00:16:43.954 "config": [ 00:16:43.954 { 00:16:43.954 "method": "bdev_set_options", 00:16:43.954 "params": { 00:16:43.954 "bdev_auto_examine": true, 00:16:43.954 "bdev_io_cache_size": 256, 00:16:43.954 "bdev_io_pool_size": 65535, 00:16:43.954 "iobuf_large_cache_size": 16, 00:16:43.954 "iobuf_small_cache_size": 128 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "method": "bdev_raid_set_options", 00:16:43.954 "params": { 00:16:43.954 "process_max_bandwidth_mb_sec": 0, 00:16:43.954 "process_window_size_kb": 1024 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "method": "bdev_iscsi_set_options", 00:16:43.954 "params": { 00:16:43.954 "timeout_sec": 30 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "method": "bdev_nvme_set_options", 00:16:43.954 "params": { 00:16:43.954 "action_on_timeout": "none", 00:16:43.954 "allow_accel_sequence": false, 00:16:43.954 "arbitration_burst": 0, 00:16:43.954 "bdev_retry_count": 3, 00:16:43.954 "ctrlr_loss_timeout_sec": 0, 00:16:43.954 "delay_cmd_submit": true, 00:16:43.954 "dhchap_dhgroups": [ 00:16:43.954 "null", 00:16:43.954 "ffdhe2048", 00:16:43.954 "ffdhe3072", 00:16:43.954 "ffdhe4096", 00:16:43.954 "ffdhe6144", 00:16:43.954 "ffdhe8192" 00:16:43.954 ], 00:16:43.954 "dhchap_digests": [ 00:16:43.954 "sha256", 00:16:43.954 "sha384", 00:16:43.954 "sha512" 00:16:43.954 ], 00:16:43.954 "disable_auto_failback": false, 00:16:43.954 "fast_io_fail_timeout_sec": 0, 00:16:43.954 "generate_uuids": false, 00:16:43.954 "high_priority_weight": 0, 00:16:43.954 
"io_path_stat": false, 00:16:43.954 "io_queue_requests": 0, 00:16:43.954 "keep_alive_timeout_ms": 10000, 00:16:43.954 "low_priority_weight": 0, 00:16:43.954 "medium_priority_weight": 0, 00:16:43.954 "nvme_adminq_poll_period_us": 10000, 00:16:43.954 "nvme_error_stat": false, 00:16:43.954 "nvme_ioq_poll_period_us": 0, 00:16:43.954 "rdma_cm_event_timeout_ms": 0, 00:16:43.954 "rdma_max_cq_size": 0, 00:16:43.954 "rdma_srq_size": 0, 00:16:43.954 "rdma_umr_per_io": false, 00:16:43.954 "reconnect_delay_sec": 0, 00:16:43.954 "timeout_admin_us": 0, 00:16:43.954 "timeout_us": 0, 00:16:43.954 "transport_ack_timeout": 0, 00:16:43.954 "transport_retry_count": 4, 00:16:43.954 "transport_tos": 0 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "method": "bdev_nvme_set_hotplug", 00:16:43.954 "params": { 00:16:43.954 "enable": false, 00:16:43.954 "period_us": 100000 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "method": "bdev_malloc_create", 00:16:43.954 "params": { 00:16:43.954 "block_size": 4096, 00:16:43.954 "dif_is_head_of_md": false, 00:16:43.954 "dif_pi_format": 0, 00:16:43.954 "dif_type": 0, 00:16:43.954 "md_size": 0, 00:16:43.954 "name": "malloc0", 00:16:43.954 "num_blocks": 8192, 00:16:43.954 "optimal_io_boundary": 0, 00:16:43.954 "physical_block_size": 4096, 00:16:43.954 "uuid": "902a8fc4-fc17-497f-bf1c-0d8a889b2350" 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "method": "bdev_wait_for_examine" 00:16:43.954 } 00:16:43.954 ] 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "subsystem": "nbd", 00:16:43.954 "config": [] 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "subsystem": "scheduler", 00:16:43.954 "config": [ 00:16:43.954 { 00:16:43.954 "method": "framework_set_scheduler", 00:16:43.954 "params": { 00:16:43.954 "name": "static" 00:16:43.954 } 00:16:43.954 } 00:16:43.954 ] 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "subsystem": "nvmf", 00:16:43.954 "config": [ 00:16:43.954 { 00:16:43.954 "method": "nvmf_set_config", 00:16:43.954 "params": { 00:16:43.954 "admin_cmd_passthru": { 00:16:43.954 "identify_ctrlr": false 00:16:43.954 }, 00:16:43.954 "dhchap_dhgroups": [ 00:16:43.954 "null", 00:16:43.954 "ffdhe2048", 00:16:43.954 "ffdhe3072", 00:16:43.954 "ffdhe4096", 00:16:43.954 "ffdhe6144", 00:16:43.954 "ffdhe8192" 00:16:43.954 ], 00:16:43.954 "dhchap_digests": [ 00:16:43.954 "sha256", 00:16:43.954 "sha384", 00:16:43.954 "sha512" 00:16:43.954 ], 00:16:43.954 "discovery_filter": "match_any" 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "method": "nvmf_set_max_subsystems", 00:16:43.954 "params": { 00:16:43.954 "max_subsystems": 1024 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "method": "nvmf_set_crdt", 00:16:43.954 "params": { 00:16:43.954 "crdt1": 0, 00:16:43.954 "crdt2": 0, 00:16:43.954 "crdt3": 0 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "method": "nvmf_create_transport", 00:16:43.954 "params": { 00:16:43.954 "abort_timeout_sec": 1, 00:16:43.954 "ack_timeout": 0, 00:16:43.954 "buf_cache_size": 4294967295, 00:16:43.954 "c2h_success": false, 00:16:43.954 "data_wr_pool_size": 0, 00:16:43.954 "dif_insert_or_strip": false, 00:16:43.954 "in_capsule_data_size": 4096, 00:16:43.954 "io_unit_size": 131072, 00:16:43.954 "max_aq_depth": 128, 00:16:43.954 "max_io_qpairs_per_ctrlr": 127, 00:16:43.954 "max_io_size": 131072, 00:16:43.954 "max_queue_depth": 128, 00:16:43.954 "num_shared_buffers": 511, 00:16:43.954 "sock_priority": 0, 00:16:43.954 "trtype": "TCP", 00:16:43.954 "zcopy": false 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 
00:16:43.954 "method": "nvmf_create_subsystem", 00:16:43.954 "params": { 00:16:43.954 "allow_any_host": false, 00:16:43.954 "ana_reporting": false, 00:16:43.954 "max_cntlid": 65519, 00:16:43.954 "max_namespaces": 10, 00:16:43.954 "min_cntlid": 1, 00:16:43.954 "model_number": "SPDK bdev Controller", 00:16:43.954 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:43.954 "serial_number": "SPDK00000000000001" 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "method": "nvmf_subsystem_add_host", 00:16:43.954 "params": { 00:16:43.954 "host": "nqn.2016-06.io.spdk:host1", 00:16:43.954 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:43.954 "psk": "key0" 00:16:43.954 } 00:16:43.954 }, 00:16:43.954 { 00:16:43.954 "method": "nvmf_subsystem_add_ns", 00:16:43.954 "params": { 00:16:43.954 "namespace": { 00:16:43.954 "bdev_name": "malloc0", 00:16:43.954 "nguid": "902A8FC4FC17497FBF1C0D8A889B2350", 00:16:43.954 "no_auto_visible": false, 00:16:43.954 "nsid": 1, 00:16:43.954 "uuid": "902a8fc4-fc17-497f-bf1c-0d8a889b2350" 00:16:43.954 }, 00:16:43.954 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:43.954 } 00:16:43.955 }, 00:16:43.955 { 00:16:43.955 "method": "nvmf_subsystem_add_listener", 00:16:43.955 "params": { 00:16:43.955 "listen_address": { 00:16:43.955 "adrfam": "IPv4", 00:16:43.955 "traddr": "10.0.0.3", 00:16:43.955 "trsvcid": "4420", 00:16:43.955 "trtype": "TCP" 00:16:43.955 }, 00:16:43.955 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:43.955 "secure_channel": true 00:16:43.955 } 00:16:43.955 } 00:16:43.955 ] 00:16:43.955 } 00:16:43.955 ] 00:16:43.955 }' 00:16:43.955 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:44.213 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:44.213 "subsystems": [ 00:16:44.213 { 00:16:44.213 "subsystem": "keyring", 00:16:44.213 "config": [ 00:16:44.213 { 00:16:44.213 "method": "keyring_file_add_key", 00:16:44.213 "params": { 00:16:44.213 "name": "key0", 00:16:44.213 "path": "/tmp/tmp.6UkUdY8wJO" 00:16:44.213 } 00:16:44.213 } 00:16:44.213 ] 00:16:44.213 }, 00:16:44.213 { 00:16:44.213 "subsystem": "iobuf", 00:16:44.213 "config": [ 00:16:44.213 { 00:16:44.213 "method": "iobuf_set_options", 00:16:44.213 "params": { 00:16:44.213 "enable_numa": false, 00:16:44.213 "large_bufsize": 135168, 00:16:44.213 "large_pool_count": 1024, 00:16:44.213 "small_bufsize": 8192, 00:16:44.213 "small_pool_count": 8192 00:16:44.213 } 00:16:44.213 } 00:16:44.213 ] 00:16:44.213 }, 00:16:44.213 { 00:16:44.213 "subsystem": "sock", 00:16:44.213 "config": [ 00:16:44.213 { 00:16:44.213 "method": "sock_set_default_impl", 00:16:44.213 "params": { 00:16:44.213 "impl_name": "posix" 00:16:44.213 } 00:16:44.213 }, 00:16:44.213 { 00:16:44.213 "method": "sock_impl_set_options", 00:16:44.213 "params": { 00:16:44.213 "enable_ktls": false, 00:16:44.213 "enable_placement_id": 0, 00:16:44.213 "enable_quickack": false, 00:16:44.213 "enable_recv_pipe": true, 00:16:44.213 "enable_zerocopy_send_client": false, 00:16:44.213 "enable_zerocopy_send_server": true, 00:16:44.213 "impl_name": "ssl", 00:16:44.213 "recv_buf_size": 4096, 00:16:44.213 "send_buf_size": 4096, 00:16:44.213 "tls_version": 0, 00:16:44.213 "zerocopy_threshold": 0 00:16:44.213 } 00:16:44.213 }, 00:16:44.213 { 00:16:44.213 "method": "sock_impl_set_options", 00:16:44.213 "params": { 00:16:44.213 "enable_ktls": false, 00:16:44.213 "enable_placement_id": 0, 00:16:44.213 "enable_quickack": false, 00:16:44.213 
"enable_recv_pipe": true, 00:16:44.213 "enable_zerocopy_send_client": false, 00:16:44.213 "enable_zerocopy_send_server": true, 00:16:44.213 "impl_name": "posix", 00:16:44.213 "recv_buf_size": 2097152, 00:16:44.213 "send_buf_size": 2097152, 00:16:44.213 "tls_version": 0, 00:16:44.213 "zerocopy_threshold": 0 00:16:44.213 } 00:16:44.213 } 00:16:44.213 ] 00:16:44.213 }, 00:16:44.213 { 00:16:44.213 "subsystem": "vmd", 00:16:44.213 "config": [] 00:16:44.213 }, 00:16:44.213 { 00:16:44.213 "subsystem": "accel", 00:16:44.213 "config": [ 00:16:44.213 { 00:16:44.213 "method": "accel_set_options", 00:16:44.213 "params": { 00:16:44.213 "buf_count": 2048, 00:16:44.213 "large_cache_size": 16, 00:16:44.213 "sequence_count": 2048, 00:16:44.213 "small_cache_size": 128, 00:16:44.213 "task_count": 2048 00:16:44.213 } 00:16:44.213 } 00:16:44.213 ] 00:16:44.213 }, 00:16:44.213 { 00:16:44.213 "subsystem": "bdev", 00:16:44.214 "config": [ 00:16:44.214 { 00:16:44.214 "method": "bdev_set_options", 00:16:44.214 "params": { 00:16:44.214 "bdev_auto_examine": true, 00:16:44.214 "bdev_io_cache_size": 256, 00:16:44.214 "bdev_io_pool_size": 65535, 00:16:44.214 "iobuf_large_cache_size": 16, 00:16:44.214 "iobuf_small_cache_size": 128 00:16:44.214 } 00:16:44.214 }, 00:16:44.214 { 00:16:44.214 "method": "bdev_raid_set_options", 00:16:44.214 "params": { 00:16:44.214 "process_max_bandwidth_mb_sec": 0, 00:16:44.214 "process_window_size_kb": 1024 00:16:44.214 } 00:16:44.214 }, 00:16:44.214 { 00:16:44.214 "method": "bdev_iscsi_set_options", 00:16:44.214 "params": { 00:16:44.214 "timeout_sec": 30 00:16:44.214 } 00:16:44.214 }, 00:16:44.214 { 00:16:44.214 "method": "bdev_nvme_set_options", 00:16:44.214 "params": { 00:16:44.214 "action_on_timeout": "none", 00:16:44.214 "allow_accel_sequence": false, 00:16:44.214 "arbitration_burst": 0, 00:16:44.214 "bdev_retry_count": 3, 00:16:44.214 "ctrlr_loss_timeout_sec": 0, 00:16:44.214 "delay_cmd_submit": true, 00:16:44.214 "dhchap_dhgroups": [ 00:16:44.214 "null", 00:16:44.214 "ffdhe2048", 00:16:44.214 "ffdhe3072", 00:16:44.214 "ffdhe4096", 00:16:44.214 "ffdhe6144", 00:16:44.214 "ffdhe8192" 00:16:44.214 ], 00:16:44.214 "dhchap_digests": [ 00:16:44.214 "sha256", 00:16:44.214 "sha384", 00:16:44.214 "sha512" 00:16:44.214 ], 00:16:44.214 "disable_auto_failback": false, 00:16:44.214 "fast_io_fail_timeout_sec": 0, 00:16:44.214 "generate_uuids": false, 00:16:44.214 "high_priority_weight": 0, 00:16:44.214 "io_path_stat": false, 00:16:44.214 "io_queue_requests": 512, 00:16:44.214 "keep_alive_timeout_ms": 10000, 00:16:44.214 "low_priority_weight": 0, 00:16:44.214 "medium_priority_weight": 0, 00:16:44.214 "nvme_adminq_poll_period_us": 10000, 00:16:44.214 "nvme_error_stat": false, 00:16:44.214 "nvme_ioq_poll_period_us": 0, 00:16:44.214 "rdma_cm_event_timeout_ms": 0, 00:16:44.214 "rdma_max_cq_size": 0, 00:16:44.214 "rdma_srq_size": 0, 00:16:44.214 "rdma_umr_per_io": false, 00:16:44.214 "reconnect_delay_sec": 0, 00:16:44.214 "timeout_admin_us": 0, 00:16:44.214 "timeout_us": 0, 00:16:44.214 "transport_ack_timeout": 0, 00:16:44.214 "transport_retry_count": 4, 00:16:44.214 "transport_tos": 0 00:16:44.214 } 00:16:44.214 }, 00:16:44.214 { 00:16:44.214 "method": "bdev_nvme_attach_controller", 00:16:44.214 "params": { 00:16:44.214 "adrfam": "IPv4", 00:16:44.214 "ctrlr_loss_timeout_sec": 0, 00:16:44.214 "ddgst": false, 00:16:44.214 "fast_io_fail_timeout_sec": 0, 00:16:44.214 "hdgst": false, 00:16:44.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:44.214 "multipath": "multipath", 00:16:44.214 "name": "TLSTEST", 
00:16:44.214 "prchk_guard": false, 00:16:44.214 "prchk_reftag": false, 00:16:44.214 "psk": "key0", 00:16:44.214 "reconnect_delay_sec": 0, 00:16:44.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.214 "traddr": "10.0.0.3", 00:16:44.214 "trsvcid": "4420", 00:16:44.214 "trtype": "TCP" 00:16:44.214 } 00:16:44.214 }, 00:16:44.214 { 00:16:44.214 "method": "bdev_nvme_set_hotplug", 00:16:44.214 "params": { 00:16:44.214 "enable": false, 00:16:44.214 "period_us": 100000 00:16:44.214 } 00:16:44.214 }, 00:16:44.214 { 00:16:44.214 "method": "bdev_wait_for_examine" 00:16:44.214 } 00:16:44.214 ] 00:16:44.214 }, 00:16:44.214 { 00:16:44.214 "subsystem": "nbd", 00:16:44.214 "config": [] 00:16:44.214 } 00:16:44.214 ] 00:16:44.214 }' 00:16:44.214 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 85705 00:16:44.214 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85705 ']' 00:16:44.214 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85705 00:16:44.214 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:44.214 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.214 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85705 00:16:44.214 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:44.214 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:44.214 killing process with pid 85705 00:16:44.214 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85705' 00:16:44.214 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85705 00:16:44.214 Received shutdown signal, test time was about 10.000000 seconds 00:16:44.214 00:16:44.214 Latency(us) 00:16:44.214 [2024-12-16T11:38:44.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.214 [2024-12-16T11:38:44.967Z] =================================================================================================================== 00:16:44.214 [2024-12-16T11:38:44.967Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:44.214 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85705 00:16:44.474 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 85590 00:16:44.474 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85590 ']' 00:16:44.474 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85590 00:16:44.474 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:44.474 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.474 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85590 00:16:44.474 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:44.474 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:44.474 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
85590' 00:16:44.474 killing process with pid 85590 00:16:44.474 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85590 00:16:44.474 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85590 00:16:44.734 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:44.734 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:44.734 "subsystems": [ 00:16:44.734 { 00:16:44.734 "subsystem": "keyring", 00:16:44.734 "config": [ 00:16:44.734 { 00:16:44.734 "method": "keyring_file_add_key", 00:16:44.734 "params": { 00:16:44.734 "name": "key0", 00:16:44.734 "path": "/tmp/tmp.6UkUdY8wJO" 00:16:44.734 } 00:16:44.734 } 00:16:44.734 ] 00:16:44.734 }, 00:16:44.734 { 00:16:44.734 "subsystem": "iobuf", 00:16:44.734 "config": [ 00:16:44.734 { 00:16:44.734 "method": "iobuf_set_options", 00:16:44.734 "params": { 00:16:44.734 "enable_numa": false, 00:16:44.734 "large_bufsize": 135168, 00:16:44.734 "large_pool_count": 1024, 00:16:44.734 "small_bufsize": 8192, 00:16:44.734 "small_pool_count": 8192 00:16:44.734 } 00:16:44.734 } 00:16:44.734 ] 00:16:44.734 }, 00:16:44.734 { 00:16:44.734 "subsystem": "sock", 00:16:44.734 "config": [ 00:16:44.734 { 00:16:44.734 "method": "sock_set_default_impl", 00:16:44.734 "params": { 00:16:44.734 "impl_name": "posix" 00:16:44.734 } 00:16:44.734 }, 00:16:44.734 { 00:16:44.734 "method": "sock_impl_set_options", 00:16:44.734 "params": { 00:16:44.734 "enable_ktls": false, 00:16:44.734 "enable_placement_id": 0, 00:16:44.734 "enable_quickack": false, 00:16:44.734 "enable_recv_pipe": true, 00:16:44.734 "enable_zerocopy_send_client": false, 00:16:44.734 "enable_zerocopy_send_server": true, 00:16:44.734 "impl_name": "ssl", 00:16:44.734 "recv_buf_size": 4096, 00:16:44.734 "send_buf_size": 4096, 00:16:44.734 "tls_version": 0, 00:16:44.734 "zerocopy_threshold": 0 00:16:44.734 } 00:16:44.734 }, 00:16:44.734 { 00:16:44.734 "method": "sock_impl_set_options", 00:16:44.734 "params": { 00:16:44.734 "enable_ktls": false, 00:16:44.734 "enable_placement_id": 0, 00:16:44.734 "enable_quickack": false, 00:16:44.734 "enable_recv_pipe": true, 00:16:44.734 "enable_zerocopy_send_client": false, 00:16:44.734 "enable_zerocopy_send_server": true, 00:16:44.734 "impl_name": "posix", 00:16:44.734 "recv_buf_size": 2097152, 00:16:44.734 "send_buf_size": 2097152, 00:16:44.734 "tls_version": 0, 00:16:44.734 "zerocopy_threshold": 0 00:16:44.734 } 00:16:44.734 } 00:16:44.734 ] 00:16:44.734 }, 00:16:44.734 { 00:16:44.734 "subsystem": "vmd", 00:16:44.734 "config": [] 00:16:44.734 }, 00:16:44.734 { 00:16:44.734 "subsystem": "accel", 00:16:44.734 "config": [ 00:16:44.734 { 00:16:44.734 "method": "accel_set_options", 00:16:44.734 "params": { 00:16:44.734 "buf_count": 2048, 00:16:44.734 "large_cache_size": 16, 00:16:44.734 "sequence_count": 2048, 00:16:44.734 "small_cache_size": 128, 00:16:44.734 "task_count": 2048 00:16:44.734 } 00:16:44.734 } 00:16:44.734 ] 00:16:44.734 }, 00:16:44.734 { 00:16:44.734 "subsystem": "bdev", 00:16:44.734 "config": [ 00:16:44.734 { 00:16:44.734 "method": "bdev_set_options", 00:16:44.734 "params": { 00:16:44.734 "bdev_auto_examine": true, 00:16:44.734 "bdev_io_cache_size": 256, 00:16:44.734 "bdev_io_pool_size": 65535, 00:16:44.734 "iobuf_large_cache_size": 16, 00:16:44.734 "iobuf_small_cache_size": 128 00:16:44.734 } 00:16:44.734 }, 00:16:44.734 { 00:16:44.734 "method": "bdev_raid_set_options", 00:16:44.734 "params": { 00:16:44.734 
"process_max_bandwidth_mb_sec": 0, 00:16:44.734 "process_window_size_kb": 1024 00:16:44.734 } 00:16:44.734 }, 00:16:44.734 { 00:16:44.734 "method": "bdev_iscsi_set_options", 00:16:44.734 "params": { 00:16:44.734 "timeout_sec": 30 00:16:44.734 } 00:16:44.734 }, 00:16:44.734 { 00:16:44.734 "method": "bdev_nvme_set_options", 00:16:44.734 "params": { 00:16:44.734 "action_on_timeout": "none", 00:16:44.734 "allow_accel_sequence": false, 00:16:44.734 "arbitration_burst": 0, 00:16:44.734 "bdev_retry_count": 3, 00:16:44.734 "ctrlr_loss_timeout_sec": 0, 00:16:44.734 "delay_cmd_submit": true, 00:16:44.734 "dhchap_dhgroups": [ 00:16:44.734 "null", 00:16:44.734 "ffdhe2048", 00:16:44.734 "ffdhe3072", 00:16:44.734 "ffdhe4096", 00:16:44.734 "ffdhe6144", 00:16:44.734 "ffdhe8192" 00:16:44.734 ], 00:16:44.734 "dhchap_digests": [ 00:16:44.734 "sha256", 00:16:44.734 "sha384", 00:16:44.734 "sha512" 00:16:44.734 ], 00:16:44.734 "disable_auto_failback": false, 00:16:44.734 "fast_io_fail_timeout_sec": 0, 00:16:44.734 "generate_uuids": false, 00:16:44.734 "high_priority_weight": 0, 00:16:44.734 "io_path_stat": false, 00:16:44.734 "io_queue_requests": 0, 00:16:44.734 "keep_alive_timeout_ms": 10000, 00:16:44.734 "low_priority_weight": 0, 00:16:44.734 "medium_priority_weight": 0, 00:16:44.734 "nvme_adminq_poll_period_us": 10000, 00:16:44.734 "nvme_error_stat": false, 00:16:44.734 "nvme_ioq_poll_period_us": 0, 00:16:44.734 "rdma_cm_event_timeout_ms": 0, 00:16:44.735 "rdma_max_cq_size": 0, 00:16:44.735 "rdma_srq_size": 0, 00:16:44.735 "rdma_umr_per_io": false, 00:16:44.735 "reconnect_delay_sec": 0, 00:16:44.735 "timeout_admin_us": 0, 00:16:44.735 "timeout_us": 0, 00:16:44.735 "transport_ack_timeout": 0, 00:16:44.735 "transport_retry_count": 4, 00:16:44.735 "transport_tos": 0 00:16:44.735 } 00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "method": "bdev_nvme_set_hotplug", 00:16:44.735 "params": { 00:16:44.735 "enable": false, 00:16:44.735 "period_us": 100000 00:16:44.735 } 00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "method": "bdev_malloc_create", 00:16:44.735 "params": { 00:16:44.735 "block_size": 4096, 00:16:44.735 "dif_is_head_of_md": false, 00:16:44.735 "dif_pi_format": 0, 00:16:44.735 "dif_type": 0, 00:16:44.735 "md_size": 0, 00:16:44.735 "name": "malloc0", 00:16:44.735 "num_blocks": 8192, 00:16:44.735 "optimal_io_boundary": 0, 00:16:44.735 "physical_block_size": 4096, 00:16:44.735 "uuid": "902a8fc4-fc17-497f-bf1c-0d8a889b2350" 00:16:44.735 } 00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "method": "bdev_wait_for_examine" 00:16:44.735 } 00:16:44.735 ] 00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "subsystem": "nbd", 00:16:44.735 "config": [] 00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "subsystem": "scheduler", 00:16:44.735 "config": [ 00:16:44.735 { 00:16:44.735 "method": "framework_set_scheduler", 00:16:44.735 "params": { 00:16:44.735 "name": "static" 00:16:44.735 } 00:16:44.735 } 00:16:44.735 ] 00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "subsystem": "nvmf", 00:16:44.735 "config": [ 00:16:44.735 { 00:16:44.735 "method": "nvmf_set_config", 00:16:44.735 "params": { 00:16:44.735 "admin_cmd_passthru": { 00:16:44.735 "identify_ctrlr": false 00:16:44.735 }, 00:16:44.735 "dhchap_dhgroups": [ 00:16:44.735 "null", 00:16:44.735 "ffdhe2048", 00:16:44.735 "ffdhe3072", 00:16:44.735 "ffdhe4096", 00:16:44.735 "ffdhe6144", 00:16:44.735 "ffdhe8192" 00:16:44.735 ], 00:16:44.735 "dhchap_digests": [ 00:16:44.735 "sha256", 00:16:44.735 "sha384", 00:16:44.735 "sha512" 00:16:44.735 ], 00:16:44.735 "discovery_filter": "match_any" 00:16:44.735 } 
00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "method": "nvmf_set_max_subsystems", 00:16:44.735 "params": { 00:16:44.735 "max_subsystems": 1024 00:16:44.735 } 00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "method": "nvmf_set_crdt", 00:16:44.735 "params": { 00:16:44.735 "crdt1": 0, 00:16:44.735 "crdt2": 0, 00:16:44.735 "crdt3": 0 00:16:44.735 } 00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "method": "nvmf_create_transport", 00:16:44.735 "params": { 00:16:44.735 "abort_timeout_sec": 1, 00:16:44.735 "ack_timeout": 0, 00:16:44.735 "buf_cache_size": 4294967295, 00:16:44.735 "c2h_success": false, 00:16:44.735 "data_wr_pool_size": 0, 00:16:44.735 "dif_insert_or_strip": false, 00:16:44.735 "in_capsule_data_size": 4096, 00:16:44.735 "io_unit_size": 131072, 00:16:44.735 "max_aq_depth": 128, 00:16:44.735 "max_io_qpairs_per_ctrlr": 127, 00:16:44.735 "max_io_size": 131072, 00:16:44.735 "max_queue_depth": 128, 00:16:44.735 "num_shared_buffers": 511, 00:16:44.735 "sock_priority": 0, 00:16:44.735 "trtype": "TCP", 00:16:44.735 "zcopy": false 00:16:44.735 } 00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "method": "nvmf_create_subsystem", 00:16:44.735 "params": { 00:16:44.735 "allow_any_host": false, 00:16:44.735 "ana_reporting": false, 00:16:44.735 "max_cntlid": 65519, 00:16:44.735 "max_namespaces": 10, 00:16:44.735 "min_cntlid": 1, 00:16:44.735 "model_number": "SPDK bdev Controller", 00:16:44.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.735 "serial_number": "SPDK00000000000001" 00:16:44.735 } 00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "method": "nvmf_subsystem_add_host", 00:16:44.735 "params": { 00:16:44.735 "host": "nqn.2016-06.io.spdk:host1", 00:16:44.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.735 "psk": "key0" 00:16:44.735 } 00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "method": "nvmf_subsystem_add_ns", 00:16:44.735 "params": { 00:16:44.735 "namespace": { 00:16:44.735 "bdev_name": "malloc0", 00:16:44.735 "nguid": "902A8FC4FC17497FBF1C0D8A889B2350", 00:16:44.735 "no_auto_visible": false, 00:16:44.735 "nsid": 1, 00:16:44.735 "uuid": "902a8fc4-fc17-497f-bf1c-0d8a889b2350" 00:16:44.735 }, 00:16:44.735 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:44.735 } 00:16:44.735 }, 00:16:44.735 { 00:16:44.735 "method": "nvmf_subsystem_add_listener", 00:16:44.735 "params": { 00:16:44.735 "listen_address": { 00:16:44.735 "adrfam": "IPv4", 00:16:44.735 "traddr": "10.0.0.3", 00:16:44.735 "trsvcid": "4420", 00:16:44.735 "trtype": "TCP" 00:16:44.735 }, 00:16:44.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.735 "secure_channel": true 00:16:44.735 } 00:16:44.735 } 00:16:44.735 ] 00:16:44.735 } 00:16:44.735 ] 00:16:44.735 }' 00:16:44.735 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:44.735 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:44.735 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.735 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85785 00:16:44.735 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85785 00:16:44.735 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85785 ']' 00:16:44.735 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:44.735 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.735 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.735 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.735 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.735 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.735 [2024-12-16 11:38:45.396252] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:16:44.735 [2024-12-16 11:38:45.396379] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.993 [2024-12-16 11:38:45.545490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.993 [2024-12-16 11:38:45.600848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.993 [2024-12-16 11:38:45.600915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.993 [2024-12-16 11:38:45.600940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.993 [2024-12-16 11:38:45.600948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.993 [2024-12-16 11:38:45.600955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.993 [2024-12-16 11:38:45.601425] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.251 [2024-12-16 11:38:45.842291] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.251 [2024-12-16 11:38:45.874264] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:45.251 [2024-12-16 11:38:45.874523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
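Note: the bdevperf initiator launched next is handed its whole configuration as JSON on /dev/fd/63, but the same TLS wiring can also be issued over its RPC socket, as the later runs in this log do. A minimal standalone sketch, assuming the repo paths, the PSK interchange file (/tmp/tmp.6UkUdY8wJO) and the NQNs from this run:

  # Start bdevperf idle (-z) with an RPC socket; workload knobs mirror this run.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # Register the TLS PSK under the keyring name "key0".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO
  # Attach to the secure listener; --psk names the keyring entry, not the file.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # Run the configured verify workload to completion.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests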
00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=85829 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 85829 /var/tmp/bdevperf.sock 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85829 ']' 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:45.818 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:45.818 "subsystems": [ 00:16:45.818 { 00:16:45.818 "subsystem": "keyring", 00:16:45.818 "config": [ 00:16:45.818 { 00:16:45.818 "method": "keyring_file_add_key", 00:16:45.818 "params": { 00:16:45.818 "name": "key0", 00:16:45.818 "path": "/tmp/tmp.6UkUdY8wJO" 00:16:45.818 } 00:16:45.818 } 00:16:45.818 ] 00:16:45.818 }, 00:16:45.818 { 00:16:45.818 "subsystem": "iobuf", 00:16:45.818 "config": [ 00:16:45.818 { 00:16:45.818 "method": "iobuf_set_options", 00:16:45.818 "params": { 00:16:45.818 "enable_numa": false, 00:16:45.818 "large_bufsize": 135168, 00:16:45.818 "large_pool_count": 1024, 00:16:45.818 "small_bufsize": 8192, 00:16:45.818 "small_pool_count": 8192 00:16:45.818 } 00:16:45.818 } 00:16:45.818 ] 00:16:45.818 }, 00:16:45.818 { 00:16:45.818 "subsystem": "sock", 00:16:45.818 "config": [ 00:16:45.818 { 00:16:45.818 "method": "sock_set_default_impl", 00:16:45.818 "params": { 00:16:45.818 "impl_name": "posix" 00:16:45.818 } 00:16:45.818 }, 00:16:45.818 { 00:16:45.818 "method": "sock_impl_set_options", 00:16:45.818 "params": { 00:16:45.818 "enable_ktls": false, 00:16:45.818 "enable_placement_id": 0, 00:16:45.818 "enable_quickack": false, 00:16:45.818 "enable_recv_pipe": true, 00:16:45.818 "enable_zerocopy_send_client": false, 00:16:45.818 "enable_zerocopy_send_server": true, 00:16:45.818 "impl_name": "ssl", 00:16:45.818 "recv_buf_size": 4096, 00:16:45.818 "send_buf_size": 4096, 00:16:45.818 "tls_version": 0, 00:16:45.818 "zerocopy_threshold": 0 00:16:45.818 } 00:16:45.818 }, 00:16:45.818 { 00:16:45.818 "method": "sock_impl_set_options", 00:16:45.818 "params": { 00:16:45.818 "enable_ktls": false, 00:16:45.818 "enable_placement_id": 0, 00:16:45.818 "enable_quickack": false, 00:16:45.818 "enable_recv_pipe": true, 00:16:45.818 "enable_zerocopy_send_client": false, 00:16:45.818 "enable_zerocopy_send_server": true, 00:16:45.818 "impl_name": "posix", 00:16:45.818 "recv_buf_size": 2097152, 00:16:45.818 "send_buf_size": 2097152, 00:16:45.818 "tls_version": 0, 00:16:45.818 "zerocopy_threshold": 0 00:16:45.818 } 00:16:45.818 } 00:16:45.818 ] 00:16:45.818 }, 00:16:45.818 { 00:16:45.818 "subsystem": "vmd", 00:16:45.818 "config": [] 00:16:45.818 }, 00:16:45.818 { 00:16:45.818 "subsystem": "accel", 00:16:45.818 
"config": [ 00:16:45.818 { 00:16:45.818 "method": "accel_set_options", 00:16:45.818 "params": { 00:16:45.818 "buf_count": 2048, 00:16:45.818 "large_cache_size": 16, 00:16:45.818 "sequence_count": 2048, 00:16:45.818 "small_cache_size": 128, 00:16:45.818 "task_count": 2048 00:16:45.818 } 00:16:45.818 } 00:16:45.818 ] 00:16:45.819 }, 00:16:45.819 { 00:16:45.819 "subsystem": "bdev", 00:16:45.819 "config": [ 00:16:45.819 { 00:16:45.819 "method": "bdev_set_options", 00:16:45.819 "params": { 00:16:45.819 "bdev_auto_examine": true, 00:16:45.819 "bdev_io_cache_size": 256, 00:16:45.819 "bdev_io_pool_size": 65535, 00:16:45.819 "iobuf_large_cache_size": 16, 00:16:45.819 "iobuf_small_cache_size": 128 00:16:45.819 } 00:16:45.819 }, 00:16:45.819 { 00:16:45.819 "method": "bdev_raid_set_options", 00:16:45.819 "params": { 00:16:45.819 "process_max_bandwidth_mb_sec": 0, 00:16:45.819 "process_window_size_kb": 1024 00:16:45.819 } 00:16:45.819 }, 00:16:45.819 { 00:16:45.819 "method": "bdev_iscsi_set_options", 00:16:45.819 "params": { 00:16:45.819 "timeout_sec": 30 00:16:45.819 } 00:16:45.819 }, 00:16:45.819 { 00:16:45.819 "method": "bdev_nvme_set_options", 00:16:45.819 "params": { 00:16:45.819 "action_on_timeout": "none", 00:16:45.819 "allow_accel_sequence": false, 00:16:45.819 "arbitration_burst": 0, 00:16:45.819 "bdev_retry_count": 3, 00:16:45.819 "ctrlr_loss_timeout_sec": 0, 00:16:45.819 "delay_cmd_submit": true, 00:16:45.819 "dhchap_dhgroups": [ 00:16:45.819 "null", 00:16:45.819 "ffdhe2048", 00:16:45.819 "ffdhe3072", 00:16:45.819 "ffdhe4096", 00:16:45.819 "ffdhe6144", 00:16:45.819 "ffdhe8192" 00:16:45.819 ], 00:16:45.819 "dhchap_digests": [ 00:16:45.819 "sha256", 00:16:45.819 "sha384", 00:16:45.819 "sha512" 00:16:45.819 ], 00:16:45.819 "disable_auto_failback": false, 00:16:45.819 "fast_io_fail_timeout_sec": 0, 00:16:45.819 "generate_uuids": false, 00:16:45.819 "high_priority_weight": 0, 00:16:45.819 "io_path_stat": false, 00:16:45.819 "io_queue_requests": 512, 00:16:45.819 "keep_alive_timeout_ms": 10000, 00:16:45.819 "low_priority_weight": 0, 00:16:45.819 "medium_priority_weight": 0, 00:16:45.819 "nvme_adminq_poll_period_us": 10000, 00:16:45.819 "nvme_error_stat": false, 00:16:45.819 "nvme_ioq_poll_period_us": 0, 00:16:45.819 "rdma_cm_event_timeout_ms": 0, 00:16:45.819 "rdma_max_cq_size": 0, 00:16:45.819 "rdma_srq_size": 0, 00:16:45.819 "rdma_umr_per_io": false, 00:16:45.819 "reconnect_delay_sec": 0, 00:16:45.819 "timeout_admin_us": 0, 00:16:45.819 "timeout_us": 0, 00:16:45.819 "transport_ack_timeout": 0, 00:16:45.819 "transport_retry_count": 4, 00:16:45.819 "transport_tos": 0 00:16:45.819 } 00:16:45.819 }, 00:16:45.819 { 00:16:45.819 "method": "bdev_nvme_attach_controller", 00:16:45.819 "params": { 00:16:45.819 "adrfam": "IPv4", 00:16:45.819 "ctrlr_loss_timeout_sec": 0, 00:16:45.819 "ddgst": false, 00:16:45.819 "fast_io_fail_timeout_sec": 0, 00:16:45.819 "hdgst": false, 00:16:45.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:45.819 "multipath": "multipath", 00:16:45.819 "name": "TLSTEST", 00:16:45.819 "prchk_guard": false, 00:16:45.819 "prchk_reftag": false, 00:16:45.819 "psk": "key0", 00:16:45.819 "reconnect_delay_sec": 0, 00:16:45.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.819 "traddr": "10.0.0.3", 00:16:45.819 "trsvcid": "4420", 00:16:45.819 "trtype": "TCP" 00:16:45.819 } 00:16:45.819 }, 00:16:45.819 { 00:16:45.819 "method": "bdev_nvme_set_hotplug", 00:16:45.819 "params": { 00:16:45.819 "enable": false, 00:16:45.819 "period_us": 100000 00:16:45.819 } 00:16:45.819 }, 00:16:45.819 { 
00:16:45.819 "method": "bdev_wait_for_examine" 00:16:45.819 } 00:16:45.819 ] 00:16:45.819 }, 00:16:45.819 { 00:16:45.819 "subsystem": "nbd", 00:16:45.819 "config": [] 00:16:45.819 } 00:16:45.819 ] 00:16:45.819 }' 00:16:45.819 [2024-12-16 11:38:46.423248] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:16:45.819 [2024-12-16 11:38:46.424048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85829 ] 00:16:46.077 [2024-12-16 11:38:46.571326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.077 [2024-12-16 11:38:46.625453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.077 [2024-12-16 11:38:46.803789] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:47.013 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.013 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:47.013 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:47.013 Running I/O for 10 seconds... 00:16:48.883 4202.00 IOPS, 16.41 MiB/s [2024-12-16T11:38:50.567Z] 4209.50 IOPS, 16.44 MiB/s [2024-12-16T11:38:51.940Z] 4133.00 IOPS, 16.14 MiB/s [2024-12-16T11:38:52.873Z] 4110.75 IOPS, 16.06 MiB/s [2024-12-16T11:38:53.806Z] 4129.00 IOPS, 16.13 MiB/s [2024-12-16T11:38:54.787Z] 4128.17 IOPS, 16.13 MiB/s [2024-12-16T11:38:55.716Z] 4136.86 IOPS, 16.16 MiB/s [2024-12-16T11:38:56.649Z] 4152.12 IOPS, 16.22 MiB/s [2024-12-16T11:38:57.581Z] 4164.89 IOPS, 16.27 MiB/s [2024-12-16T11:38:57.581Z] 4167.80 IOPS, 16.28 MiB/s 00:16:56.828 Latency(us) 00:16:56.828 [2024-12-16T11:38:57.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.828 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:56.828 Verification LBA range: start 0x0 length 0x2000 00:16:56.828 TLSTESTn1 : 10.02 4173.84 16.30 0.00 0.00 30612.51 5421.61 23950.43 00:16:56.828 [2024-12-16T11:38:57.581Z] =================================================================================================================== 00:16:56.828 [2024-12-16T11:38:57.582Z] Total : 4173.84 16.30 0.00 0.00 30612.51 5421.61 23950.43 00:16:56.829 { 00:16:56.829 "results": [ 00:16:56.829 { 00:16:56.829 "job": "TLSTESTn1", 00:16:56.829 "core_mask": "0x4", 00:16:56.829 "workload": "verify", 00:16:56.829 "status": "finished", 00:16:56.829 "verify_range": { 00:16:56.829 "start": 0, 00:16:56.829 "length": 8192 00:16:56.829 }, 00:16:56.829 "queue_depth": 128, 00:16:56.829 "io_size": 4096, 00:16:56.829 "runtime": 10.015719, 00:16:56.829 "iops": 4173.839142252294, 00:16:56.829 "mibps": 16.304059149423022, 00:16:56.829 "io_failed": 0, 00:16:56.829 "io_timeout": 0, 00:16:56.829 "avg_latency_us": 30612.508726263688, 00:16:56.829 "min_latency_us": 5421.614545454546, 00:16:56.829 "max_latency_us": 23950.429090909092 00:16:56.829 } 00:16:56.829 ], 00:16:56.829 "core_count": 1 00:16:56.829 } 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 
-- # killprocess 85829 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85829 ']' 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85829 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85829 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:57.087 killing process with pid 85829 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85829' 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85829 00:16:57.087 Received shutdown signal, test time was about 10.000000 seconds 00:16:57.087 00:16:57.087 Latency(us) 00:16:57.087 [2024-12-16T11:38:57.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.087 [2024-12-16T11:38:57.840Z] =================================================================================================================== 00:16:57.087 [2024-12-16T11:38:57.840Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85829 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 85785 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85785 ']' 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85785 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.087 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85785 00:16:57.345 killing process with pid 85785 00:16:57.345 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:57.345 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:57.345 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85785' 00:16:57.345 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85785 00:16:57.345 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85785 00:16:57.345 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:16:57.345 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:57.345 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.345 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.345 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85983 00:16:57.345 11:38:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:57.345 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85983 00:16:57.345 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85983 ']' 00:16:57.345 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.345 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.345 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.345 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.345 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.603 [2024-12-16 11:38:58.154887] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:16:57.603 [2024-12-16 11:38:58.154998] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.603 [2024-12-16 11:38:58.310617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.860 [2024-12-16 11:38:58.377874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.860 [2024-12-16 11:38:58.377944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.860 [2024-12-16 11:38:58.377970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.860 [2024-12-16 11:38:58.377981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.860 [2024-12-16 11:38:58.377991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
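Note: once this nvmf_tgt instance is up, setup_nvmf_tgt provisions it over /var/tmp/spdk.sock; the rpc.py calls traced in the lines that follow collapse into the short script below (same key file, NQNs and 32 MiB / 4 KiB-block malloc backing as this run):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport, then a subsystem that can hold up to 10 namespaces.
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k requires a secure channel, i.e. the listener only accepts TLS connections.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  # Back namespace 1 with a 32 MiB malloc bdev using 4096-byte blocks.
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Register the PSK and authorize the host NQN to use it.
  $RPC keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0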
00:16:57.860 [2024-12-16 11:38:58.378502] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.793 11:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.793 11:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:58.793 11:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:58.793 11:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:58.793 11:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.793 11:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.793 11:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.6UkUdY8wJO 00:16:58.793 11:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6UkUdY8wJO 00:16:58.793 11:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:59.052 [2024-12-16 11:38:59.547851] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.052 11:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:59.310 11:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:59.568 [2024-12-16 11:39:00.083944] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:59.568 [2024-12-16 11:39:00.084203] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:59.568 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:59.826 malloc0 00:16:59.826 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:00.084 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO 00:17:00.342 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:00.600 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=86102 00:17:00.600 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:00.600 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.600 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 86102 /var/tmp/bdevperf.sock 00:17:00.600 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86102 ']' 00:17:00.600 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:17:00.600 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.600 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.600 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.600 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.600 [2024-12-16 11:39:01.305578] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:17:00.600 [2024-12-16 11:39:01.305672] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86102 ] 00:17:00.858 [2024-12-16 11:39:01.455511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.858 [2024-12-16 11:39:01.521789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.116 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.116 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:01.116 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO 00:17:01.374 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:01.632 [2024-12-16 11:39:02.197936] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:01.632 nvme0n1 00:17:01.632 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:01.890 Running I/O for 1 seconds... 
00:17:02.823 3887.00 IOPS, 15.18 MiB/s 00:17:02.823 Latency(us) 00:17:02.823 [2024-12-16T11:39:03.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.823 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:02.823 Verification LBA range: start 0x0 length 0x2000 00:17:02.823 nvme0n1 : 1.02 3925.68 15.33 0.00 0.00 32144.44 5302.46 25499.46 00:17:02.823 [2024-12-16T11:39:03.576Z] =================================================================================================================== 00:17:02.823 [2024-12-16T11:39:03.576Z] Total : 3925.68 15.33 0.00 0.00 32144.44 5302.46 25499.46 00:17:02.823 { 00:17:02.823 "results": [ 00:17:02.823 { 00:17:02.823 "job": "nvme0n1", 00:17:02.823 "core_mask": "0x2", 00:17:02.823 "workload": "verify", 00:17:02.823 "status": "finished", 00:17:02.823 "verify_range": { 00:17:02.823 "start": 0, 00:17:02.823 "length": 8192 00:17:02.823 }, 00:17:02.823 "queue_depth": 128, 00:17:02.823 "io_size": 4096, 00:17:02.823 "runtime": 1.022752, 00:17:02.823 "iops": 3925.6828634898784, 00:17:02.823 "mibps": 15.334698685507338, 00:17:02.823 "io_failed": 0, 00:17:02.823 "io_timeout": 0, 00:17:02.823 "avg_latency_us": 32144.44347152723, 00:17:02.823 "min_latency_us": 5302.458181818181, 00:17:02.823 "max_latency_us": 25499.46181818182 00:17:02.823 } 00:17:02.823 ], 00:17:02.823 "core_count": 1 00:17:02.823 } 00:17:02.823 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 86102 00:17:02.823 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86102 ']' 00:17:02.823 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86102 00:17:02.823 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:02.823 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.823 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86102 00:17:02.823 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:02.823 killing process with pid 86102 00:17:02.823 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:02.823 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86102' 00:17:02.823 Received shutdown signal, test time was about 1.000000 seconds 00:17:02.823 00:17:02.823 Latency(us) 00:17:02.823 [2024-12-16T11:39:03.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.823 [2024-12-16T11:39:03.576Z] =================================================================================================================== 00:17:02.823 [2024-12-16T11:39:03.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:02.823 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86102 00:17:02.823 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86102 00:17:03.081 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 85983 00:17:03.081 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85983 ']' 00:17:03.081 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85983 00:17:03.081 11:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:03.081 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.081 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85983 00:17:03.081 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.081 killing process with pid 85983 00:17:03.081 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.081 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85983' 00:17:03.081 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85983 00:17:03.081 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85983 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86164 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86164 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86164 ']' 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.339 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.339 [2024-12-16 11:39:03.992870] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:17:03.340 [2024-12-16 11:39:03.992979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.598 [2024-12-16 11:39:04.132368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.598 [2024-12-16 11:39:04.180728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.598 [2024-12-16 11:39:04.180796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:03.598 [2024-12-16 11:39:04.180808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.598 [2024-12-16 11:39:04.180817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.598 [2024-12-16 11:39:04.180824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.598 [2024-12-16 11:39:04.181284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.531 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.532 [2024-12-16 11:39:05.072329] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.532 malloc0 00:17:04.532 [2024-12-16 11:39:05.103908] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:04.532 [2024-12-16 11:39:05.104205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=86214 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 86214 /var/tmp/bdevperf.sock 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86214 ']' 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:04.532 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.533 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.533 [2024-12-16 11:39:05.194376] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
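Note: the pass below ends with rpc_cmd save_config capturing the live target configuration into tgtcfg, the same JSON shape dumped earlier in this log. To take and inspect such a snapshot by hand, a sketch (output file name is arbitrary, and jq is assumed to be installed):

  # Snapshot the running target's configuration.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgtcfg.json
  # Pull out the nvmf subsystem, e.g. to confirm the TLS listener and PSK host entry.
  jq '.subsystems[] | select(.subsystem == "nvmf")' tgtcfg.json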
00:17:04.533 [2024-12-16 11:39:05.194500] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86214 ] 00:17:04.792 [2024-12-16 11:39:05.344946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.792 [2024-12-16 11:39:05.405592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.792 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.792 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:04.792 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO 00:17:05.357 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:05.357 [2024-12-16 11:39:06.082672] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:05.615 nvme0n1 00:17:05.615 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:05.615 Running I/O for 1 seconds... 00:17:06.805 4096.00 IOPS, 16.00 MiB/s 00:17:06.805 Latency(us) 00:17:06.805 [2024-12-16T11:39:07.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.805 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:06.805 Verification LBA range: start 0x0 length 0x2000 00:17:06.805 nvme0n1 : 1.02 4136.78 16.16 0.00 0.00 30621.21 7983.48 19899.11 00:17:06.805 [2024-12-16T11:39:07.558Z] =================================================================================================================== 00:17:06.805 [2024-12-16T11:39:07.558Z] Total : 4136.78 16.16 0.00 0.00 30621.21 7983.48 19899.11 00:17:06.805 { 00:17:06.805 "results": [ 00:17:06.805 { 00:17:06.805 "job": "nvme0n1", 00:17:06.805 "core_mask": "0x2", 00:17:06.805 "workload": "verify", 00:17:06.805 "status": "finished", 00:17:06.805 "verify_range": { 00:17:06.805 "start": 0, 00:17:06.805 "length": 8192 00:17:06.805 }, 00:17:06.805 "queue_depth": 128, 00:17:06.805 "io_size": 4096, 00:17:06.805 "runtime": 1.021084, 00:17:06.805 "iops": 4136.780127785765, 00:17:06.805 "mibps": 16.159297374163145, 00:17:06.805 "io_failed": 0, 00:17:06.805 "io_timeout": 0, 00:17:06.805 "avg_latency_us": 30621.210798898068, 00:17:06.805 "min_latency_us": 7983.476363636363, 00:17:06.805 "max_latency_us": 19899.112727272728 00:17:06.805 } 00:17:06.805 ], 00:17:06.805 "core_count": 1 00:17:06.805 } 00:17:06.805 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:06.805 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.805 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.805 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.805 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:17:06.805 "subsystems": [ 00:17:06.805 { 00:17:06.805 "subsystem": "keyring", 00:17:06.805 "config": [ 00:17:06.805 { 00:17:06.805 "method": "keyring_file_add_key", 00:17:06.805 "params": { 00:17:06.805 "name": "key0", 00:17:06.805 "path": "/tmp/tmp.6UkUdY8wJO" 00:17:06.805 } 00:17:06.805 } 00:17:06.805 ] 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "subsystem": "iobuf", 00:17:06.805 "config": [ 00:17:06.805 { 00:17:06.805 "method": "iobuf_set_options", 00:17:06.805 "params": { 00:17:06.805 "enable_numa": false, 00:17:06.805 "large_bufsize": 135168, 00:17:06.805 "large_pool_count": 1024, 00:17:06.805 "small_bufsize": 8192, 00:17:06.805 "small_pool_count": 8192 00:17:06.805 } 00:17:06.805 } 00:17:06.805 ] 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "subsystem": "sock", 00:17:06.805 "config": [ 00:17:06.805 { 00:17:06.805 "method": "sock_set_default_impl", 00:17:06.805 "params": { 00:17:06.805 "impl_name": "posix" 00:17:06.805 } 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "method": "sock_impl_set_options", 00:17:06.805 "params": { 00:17:06.805 "enable_ktls": false, 00:17:06.805 "enable_placement_id": 0, 00:17:06.805 "enable_quickack": false, 00:17:06.805 "enable_recv_pipe": true, 00:17:06.805 "enable_zerocopy_send_client": false, 00:17:06.805 "enable_zerocopy_send_server": true, 00:17:06.805 "impl_name": "ssl", 00:17:06.805 "recv_buf_size": 4096, 00:17:06.805 "send_buf_size": 4096, 00:17:06.805 "tls_version": 0, 00:17:06.805 "zerocopy_threshold": 0 00:17:06.805 } 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "method": "sock_impl_set_options", 00:17:06.805 "params": { 00:17:06.805 "enable_ktls": false, 00:17:06.805 "enable_placement_id": 0, 00:17:06.805 "enable_quickack": false, 00:17:06.805 "enable_recv_pipe": true, 00:17:06.805 "enable_zerocopy_send_client": false, 00:17:06.805 "enable_zerocopy_send_server": true, 00:17:06.805 "impl_name": "posix", 00:17:06.805 "recv_buf_size": 2097152, 00:17:06.805 "send_buf_size": 2097152, 00:17:06.805 "tls_version": 0, 00:17:06.805 "zerocopy_threshold": 0 00:17:06.805 } 00:17:06.805 } 00:17:06.805 ] 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "subsystem": "vmd", 00:17:06.805 "config": [] 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "subsystem": "accel", 00:17:06.805 "config": [ 00:17:06.805 { 00:17:06.805 "method": "accel_set_options", 00:17:06.805 "params": { 00:17:06.805 "buf_count": 2048, 00:17:06.805 "large_cache_size": 16, 00:17:06.805 "sequence_count": 2048, 00:17:06.805 "small_cache_size": 128, 00:17:06.805 "task_count": 2048 00:17:06.805 } 00:17:06.805 } 00:17:06.805 ] 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "subsystem": "bdev", 00:17:06.805 "config": [ 00:17:06.805 { 00:17:06.805 "method": "bdev_set_options", 00:17:06.805 "params": { 00:17:06.805 "bdev_auto_examine": true, 00:17:06.805 "bdev_io_cache_size": 256, 00:17:06.805 "bdev_io_pool_size": 65535, 00:17:06.805 "iobuf_large_cache_size": 16, 00:17:06.805 "iobuf_small_cache_size": 128 00:17:06.805 } 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "method": "bdev_raid_set_options", 00:17:06.805 "params": { 00:17:06.805 "process_max_bandwidth_mb_sec": 0, 00:17:06.805 "process_window_size_kb": 1024 00:17:06.805 } 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "method": "bdev_iscsi_set_options", 00:17:06.805 "params": { 00:17:06.805 "timeout_sec": 30 00:17:06.805 } 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "method": "bdev_nvme_set_options", 00:17:06.805 "params": { 00:17:06.805 "action_on_timeout": "none", 00:17:06.805 "allow_accel_sequence": false, 00:17:06.805 "arbitration_burst": 0, 00:17:06.805 
"bdev_retry_count": 3, 00:17:06.805 "ctrlr_loss_timeout_sec": 0, 00:17:06.805 "delay_cmd_submit": true, 00:17:06.805 "dhchap_dhgroups": [ 00:17:06.805 "null", 00:17:06.805 "ffdhe2048", 00:17:06.805 "ffdhe3072", 00:17:06.805 "ffdhe4096", 00:17:06.805 "ffdhe6144", 00:17:06.805 "ffdhe8192" 00:17:06.805 ], 00:17:06.805 "dhchap_digests": [ 00:17:06.805 "sha256", 00:17:06.805 "sha384", 00:17:06.805 "sha512" 00:17:06.805 ], 00:17:06.805 "disable_auto_failback": false, 00:17:06.805 "fast_io_fail_timeout_sec": 0, 00:17:06.805 "generate_uuids": false, 00:17:06.805 "high_priority_weight": 0, 00:17:06.805 "io_path_stat": false, 00:17:06.805 "io_queue_requests": 0, 00:17:06.805 "keep_alive_timeout_ms": 10000, 00:17:06.805 "low_priority_weight": 0, 00:17:06.805 "medium_priority_weight": 0, 00:17:06.805 "nvme_adminq_poll_period_us": 10000, 00:17:06.805 "nvme_error_stat": false, 00:17:06.805 "nvme_ioq_poll_period_us": 0, 00:17:06.805 "rdma_cm_event_timeout_ms": 0, 00:17:06.805 "rdma_max_cq_size": 0, 00:17:06.805 "rdma_srq_size": 0, 00:17:06.805 "rdma_umr_per_io": false, 00:17:06.805 "reconnect_delay_sec": 0, 00:17:06.805 "timeout_admin_us": 0, 00:17:06.805 "timeout_us": 0, 00:17:06.805 "transport_ack_timeout": 0, 00:17:06.805 "transport_retry_count": 4, 00:17:06.805 "transport_tos": 0 00:17:06.805 } 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "method": "bdev_nvme_set_hotplug", 00:17:06.805 "params": { 00:17:06.805 "enable": false, 00:17:06.805 "period_us": 100000 00:17:06.805 } 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "method": "bdev_malloc_create", 00:17:06.805 "params": { 00:17:06.805 "block_size": 4096, 00:17:06.805 "dif_is_head_of_md": false, 00:17:06.805 "dif_pi_format": 0, 00:17:06.805 "dif_type": 0, 00:17:06.805 "md_size": 0, 00:17:06.805 "name": "malloc0", 00:17:06.805 "num_blocks": 8192, 00:17:06.805 "optimal_io_boundary": 0, 00:17:06.805 "physical_block_size": 4096, 00:17:06.805 "uuid": "83e5ca0b-c771-4dba-8a63-852a9a7267d5" 00:17:06.805 } 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "method": "bdev_wait_for_examine" 00:17:06.805 } 00:17:06.805 ] 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "subsystem": "nbd", 00:17:06.805 "config": [] 00:17:06.806 }, 00:17:06.806 { 00:17:06.806 "subsystem": "scheduler", 00:17:06.806 "config": [ 00:17:06.806 { 00:17:06.806 "method": "framework_set_scheduler", 00:17:06.806 "params": { 00:17:06.806 "name": "static" 00:17:06.806 } 00:17:06.806 } 00:17:06.806 ] 00:17:06.806 }, 00:17:06.806 { 00:17:06.806 "subsystem": "nvmf", 00:17:06.806 "config": [ 00:17:06.806 { 00:17:06.806 "method": "nvmf_set_config", 00:17:06.806 "params": { 00:17:06.806 "admin_cmd_passthru": { 00:17:06.806 "identify_ctrlr": false 00:17:06.806 }, 00:17:06.806 "dhchap_dhgroups": [ 00:17:06.806 "null", 00:17:06.806 "ffdhe2048", 00:17:06.806 "ffdhe3072", 00:17:06.806 "ffdhe4096", 00:17:06.806 "ffdhe6144", 00:17:06.806 "ffdhe8192" 00:17:06.806 ], 00:17:06.806 "dhchap_digests": [ 00:17:06.806 "sha256", 00:17:06.806 "sha384", 00:17:06.806 "sha512" 00:17:06.806 ], 00:17:06.806 "discovery_filter": "match_any" 00:17:06.806 } 00:17:06.806 }, 00:17:06.806 { 00:17:06.806 "method": "nvmf_set_max_subsystems", 00:17:06.806 "params": { 00:17:06.806 "max_subsystems": 1024 00:17:06.806 } 00:17:06.806 }, 00:17:06.806 { 00:17:06.806 "method": "nvmf_set_crdt", 00:17:06.806 "params": { 00:17:06.806 "crdt1": 0, 00:17:06.806 "crdt2": 0, 00:17:06.806 "crdt3": 0 00:17:06.806 } 00:17:06.806 }, 00:17:06.806 { 00:17:06.806 "method": "nvmf_create_transport", 00:17:06.806 "params": { 00:17:06.806 "abort_timeout_sec": 1, 
00:17:06.806 "ack_timeout": 0, 00:17:06.806 "buf_cache_size": 4294967295, 00:17:06.806 "c2h_success": false, 00:17:06.806 "data_wr_pool_size": 0, 00:17:06.806 "dif_insert_or_strip": false, 00:17:06.806 "in_capsule_data_size": 4096, 00:17:06.806 "io_unit_size": 131072, 00:17:06.806 "max_aq_depth": 128, 00:17:06.806 "max_io_qpairs_per_ctrlr": 127, 00:17:06.806 "max_io_size": 131072, 00:17:06.806 "max_queue_depth": 128, 00:17:06.806 "num_shared_buffers": 511, 00:17:06.806 "sock_priority": 0, 00:17:06.806 "trtype": "TCP", 00:17:06.806 "zcopy": false 00:17:06.806 } 00:17:06.806 }, 00:17:06.806 { 00:17:06.806 "method": "nvmf_create_subsystem", 00:17:06.806 "params": { 00:17:06.806 "allow_any_host": false, 00:17:06.806 "ana_reporting": false, 00:17:06.806 "max_cntlid": 65519, 00:17:06.806 "max_namespaces": 32, 00:17:06.806 "min_cntlid": 1, 00:17:06.806 "model_number": "SPDK bdev Controller", 00:17:06.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.806 "serial_number": "00000000000000000000" 00:17:06.806 } 00:17:06.806 }, 00:17:06.806 { 00:17:06.806 "method": "nvmf_subsystem_add_host", 00:17:06.806 "params": { 00:17:06.806 "host": "nqn.2016-06.io.spdk:host1", 00:17:06.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.806 "psk": "key0" 00:17:06.806 } 00:17:06.806 }, 00:17:06.806 { 00:17:06.806 "method": "nvmf_subsystem_add_ns", 00:17:06.806 "params": { 00:17:06.806 "namespace": { 00:17:06.806 "bdev_name": "malloc0", 00:17:06.806 "nguid": "83E5CA0BC7714DBA8A63852A9A7267D5", 00:17:06.806 "no_auto_visible": false, 00:17:06.806 "nsid": 1, 00:17:06.806 "uuid": "83e5ca0b-c771-4dba-8a63-852a9a7267d5" 00:17:06.806 }, 00:17:06.806 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:06.806 } 00:17:06.806 }, 00:17:06.806 { 00:17:06.806 "method": "nvmf_subsystem_add_listener", 00:17:06.806 "params": { 00:17:06.806 "listen_address": { 00:17:06.806 "adrfam": "IPv4", 00:17:06.806 "traddr": "10.0.0.3", 00:17:06.806 "trsvcid": "4420", 00:17:06.806 "trtype": "TCP" 00:17:06.806 }, 00:17:06.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.806 "secure_channel": false, 00:17:06.806 "sock_impl": "ssl" 00:17:06.806 } 00:17:06.806 } 00:17:06.806 ] 00:17:06.806 } 00:17:06.806 ] 00:17:06.806 }' 00:17:06.806 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:07.064 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:17:07.064 "subsystems": [ 00:17:07.064 { 00:17:07.064 "subsystem": "keyring", 00:17:07.064 "config": [ 00:17:07.064 { 00:17:07.064 "method": "keyring_file_add_key", 00:17:07.064 "params": { 00:17:07.064 "name": "key0", 00:17:07.064 "path": "/tmp/tmp.6UkUdY8wJO" 00:17:07.064 } 00:17:07.064 } 00:17:07.064 ] 00:17:07.064 }, 00:17:07.064 { 00:17:07.064 "subsystem": "iobuf", 00:17:07.064 "config": [ 00:17:07.064 { 00:17:07.064 "method": "iobuf_set_options", 00:17:07.064 "params": { 00:17:07.064 "enable_numa": false, 00:17:07.064 "large_bufsize": 135168, 00:17:07.064 "large_pool_count": 1024, 00:17:07.064 "small_bufsize": 8192, 00:17:07.064 "small_pool_count": 8192 00:17:07.064 } 00:17:07.064 } 00:17:07.064 ] 00:17:07.064 }, 00:17:07.064 { 00:17:07.064 "subsystem": "sock", 00:17:07.064 "config": [ 00:17:07.064 { 00:17:07.064 "method": "sock_set_default_impl", 00:17:07.064 "params": { 00:17:07.064 "impl_name": "posix" 00:17:07.064 } 00:17:07.064 }, 00:17:07.064 { 00:17:07.064 "method": "sock_impl_set_options", 00:17:07.064 "params": { 00:17:07.064 "enable_ktls": false, 00:17:07.064 
"enable_placement_id": 0, 00:17:07.064 "enable_quickack": false, 00:17:07.064 "enable_recv_pipe": true, 00:17:07.064 "enable_zerocopy_send_client": false, 00:17:07.064 "enable_zerocopy_send_server": true, 00:17:07.064 "impl_name": "ssl", 00:17:07.064 "recv_buf_size": 4096, 00:17:07.064 "send_buf_size": 4096, 00:17:07.064 "tls_version": 0, 00:17:07.064 "zerocopy_threshold": 0 00:17:07.064 } 00:17:07.064 }, 00:17:07.064 { 00:17:07.064 "method": "sock_impl_set_options", 00:17:07.064 "params": { 00:17:07.064 "enable_ktls": false, 00:17:07.064 "enable_placement_id": 0, 00:17:07.064 "enable_quickack": false, 00:17:07.064 "enable_recv_pipe": true, 00:17:07.064 "enable_zerocopy_send_client": false, 00:17:07.064 "enable_zerocopy_send_server": true, 00:17:07.064 "impl_name": "posix", 00:17:07.064 "recv_buf_size": 2097152, 00:17:07.064 "send_buf_size": 2097152, 00:17:07.064 "tls_version": 0, 00:17:07.064 "zerocopy_threshold": 0 00:17:07.064 } 00:17:07.064 } 00:17:07.064 ] 00:17:07.064 }, 00:17:07.064 { 00:17:07.065 "subsystem": "vmd", 00:17:07.065 "config": [] 00:17:07.065 }, 00:17:07.065 { 00:17:07.065 "subsystem": "accel", 00:17:07.065 "config": [ 00:17:07.065 { 00:17:07.065 "method": "accel_set_options", 00:17:07.065 "params": { 00:17:07.065 "buf_count": 2048, 00:17:07.065 "large_cache_size": 16, 00:17:07.065 "sequence_count": 2048, 00:17:07.065 "small_cache_size": 128, 00:17:07.065 "task_count": 2048 00:17:07.065 } 00:17:07.065 } 00:17:07.065 ] 00:17:07.065 }, 00:17:07.065 { 00:17:07.065 "subsystem": "bdev", 00:17:07.065 "config": [ 00:17:07.065 { 00:17:07.065 "method": "bdev_set_options", 00:17:07.065 "params": { 00:17:07.065 "bdev_auto_examine": true, 00:17:07.065 "bdev_io_cache_size": 256, 00:17:07.065 "bdev_io_pool_size": 65535, 00:17:07.065 "iobuf_large_cache_size": 16, 00:17:07.065 "iobuf_small_cache_size": 128 00:17:07.065 } 00:17:07.065 }, 00:17:07.065 { 00:17:07.065 "method": "bdev_raid_set_options", 00:17:07.065 "params": { 00:17:07.065 "process_max_bandwidth_mb_sec": 0, 00:17:07.065 "process_window_size_kb": 1024 00:17:07.065 } 00:17:07.065 }, 00:17:07.065 { 00:17:07.065 "method": "bdev_iscsi_set_options", 00:17:07.065 "params": { 00:17:07.065 "timeout_sec": 30 00:17:07.065 } 00:17:07.065 }, 00:17:07.065 { 00:17:07.065 "method": "bdev_nvme_set_options", 00:17:07.065 "params": { 00:17:07.065 "action_on_timeout": "none", 00:17:07.065 "allow_accel_sequence": false, 00:17:07.065 "arbitration_burst": 0, 00:17:07.065 "bdev_retry_count": 3, 00:17:07.065 "ctrlr_loss_timeout_sec": 0, 00:17:07.065 "delay_cmd_submit": true, 00:17:07.065 "dhchap_dhgroups": [ 00:17:07.065 "null", 00:17:07.065 "ffdhe2048", 00:17:07.065 "ffdhe3072", 00:17:07.065 "ffdhe4096", 00:17:07.065 "ffdhe6144", 00:17:07.065 "ffdhe8192" 00:17:07.065 ], 00:17:07.065 "dhchap_digests": [ 00:17:07.065 "sha256", 00:17:07.065 "sha384", 00:17:07.065 "sha512" 00:17:07.065 ], 00:17:07.065 "disable_auto_failback": false, 00:17:07.065 "fast_io_fail_timeout_sec": 0, 00:17:07.065 "generate_uuids": false, 00:17:07.065 "high_priority_weight": 0, 00:17:07.065 "io_path_stat": false, 00:17:07.065 "io_queue_requests": 512, 00:17:07.065 "keep_alive_timeout_ms": 10000, 00:17:07.065 "low_priority_weight": 0, 00:17:07.065 "medium_priority_weight": 0, 00:17:07.065 "nvme_adminq_poll_period_us": 10000, 00:17:07.065 "nvme_error_stat": false, 00:17:07.065 "nvme_ioq_poll_period_us": 0, 00:17:07.065 "rdma_cm_event_timeout_ms": 0, 00:17:07.065 "rdma_max_cq_size": 0, 00:17:07.065 "rdma_srq_size": 0, 00:17:07.065 "rdma_umr_per_io": false, 00:17:07.065 
"reconnect_delay_sec": 0, 00:17:07.065 "timeout_admin_us": 0, 00:17:07.065 "timeout_us": 0, 00:17:07.065 "transport_ack_timeout": 0, 00:17:07.065 "transport_retry_count": 4, 00:17:07.065 "transport_tos": 0 00:17:07.065 } 00:17:07.065 }, 00:17:07.065 { 00:17:07.065 "method": "bdev_nvme_attach_controller", 00:17:07.065 "params": { 00:17:07.065 "adrfam": "IPv4", 00:17:07.065 "ctrlr_loss_timeout_sec": 0, 00:17:07.065 "ddgst": false, 00:17:07.065 "fast_io_fail_timeout_sec": 0, 00:17:07.065 "hdgst": false, 00:17:07.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.065 "multipath": "multipath", 00:17:07.065 "name": "nvme0", 00:17:07.065 "prchk_guard": false, 00:17:07.065 "prchk_reftag": false, 00:17:07.065 "psk": "key0", 00:17:07.065 "reconnect_delay_sec": 0, 00:17:07.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.065 "traddr": "10.0.0.3", 00:17:07.065 "trsvcid": "4420", 00:17:07.065 "trtype": "TCP" 00:17:07.065 } 00:17:07.065 }, 00:17:07.065 { 00:17:07.065 "method": "bdev_nvme_set_hotplug", 00:17:07.065 "params": { 00:17:07.065 "enable": false, 00:17:07.065 "period_us": 100000 00:17:07.065 } 00:17:07.065 }, 00:17:07.065 { 00:17:07.065 "method": "bdev_enable_histogram", 00:17:07.065 "params": { 00:17:07.065 "enable": true, 00:17:07.065 "name": "nvme0n1" 00:17:07.065 } 00:17:07.065 }, 00:17:07.065 { 00:17:07.065 "method": "bdev_wait_for_examine" 00:17:07.065 } 00:17:07.065 ] 00:17:07.065 }, 00:17:07.065 { 00:17:07.065 "subsystem": "nbd", 00:17:07.065 "config": [] 00:17:07.065 } 00:17:07.065 ] 00:17:07.065 }' 00:17:07.065 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 86214 00:17:07.065 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86214 ']' 00:17:07.065 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86214 00:17:07.065 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:07.065 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.065 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86214 00:17:07.324 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:07.324 killing process with pid 86214 00:17:07.324 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:07.324 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86214' 00:17:07.324 Received shutdown signal, test time was about 1.000000 seconds 00:17:07.324 00:17:07.324 Latency(us) 00:17:07.324 [2024-12-16T11:39:08.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.324 [2024-12-16T11:39:08.077Z] =================================================================================================================== 00:17:07.324 [2024-12-16T11:39:08.077Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.324 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86214 00:17:07.324 11:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86214 00:17:07.324 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 86164 00:17:07.324 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86164 ']' 00:17:07.324 11:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86164 00:17:07.324 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:07.324 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.324 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86164 00:17:07.324 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:07.324 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:07.324 killing process with pid 86164 00:17:07.324 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86164' 00:17:07.324 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86164 00:17:07.324 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86164 00:17:07.582 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:17:07.582 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.582 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.582 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:17:07.582 "subsystems": [ 00:17:07.582 { 00:17:07.582 "subsystem": "keyring", 00:17:07.582 "config": [ 00:17:07.582 { 00:17:07.582 "method": "keyring_file_add_key", 00:17:07.582 "params": { 00:17:07.582 "name": "key0", 00:17:07.582 "path": "/tmp/tmp.6UkUdY8wJO" 00:17:07.582 } 00:17:07.582 } 00:17:07.582 ] 00:17:07.582 }, 00:17:07.582 { 00:17:07.582 "subsystem": "iobuf", 00:17:07.582 "config": [ 00:17:07.582 { 00:17:07.582 "method": "iobuf_set_options", 00:17:07.582 "params": { 00:17:07.582 "enable_numa": false, 00:17:07.582 "large_bufsize": 135168, 00:17:07.582 "large_pool_count": 1024, 00:17:07.582 "small_bufsize": 8192, 00:17:07.582 "small_pool_count": 8192 00:17:07.582 } 00:17:07.582 } 00:17:07.582 ] 00:17:07.582 }, 00:17:07.582 { 00:17:07.582 "subsystem": "sock", 00:17:07.582 "config": [ 00:17:07.582 { 00:17:07.582 "method": "sock_set_default_impl", 00:17:07.582 "params": { 00:17:07.582 "impl_name": "posix" 00:17:07.582 } 00:17:07.582 }, 00:17:07.582 { 00:17:07.582 "method": "sock_impl_set_options", 00:17:07.582 "params": { 00:17:07.582 "enable_ktls": false, 00:17:07.582 "enable_placement_id": 0, 00:17:07.582 "enable_quickack": false, 00:17:07.582 "enable_recv_pipe": true, 00:17:07.582 "enable_zerocopy_send_client": false, 00:17:07.582 "enable_zerocopy_send_server": true, 00:17:07.582 "impl_name": "ssl", 00:17:07.582 "recv_buf_size": 4096, 00:17:07.582 "send_buf_size": 4096, 00:17:07.582 "tls_version": 0, 00:17:07.582 "zerocopy_threshold": 0 00:17:07.582 } 00:17:07.582 }, 00:17:07.582 { 00:17:07.582 "method": "sock_impl_set_options", 00:17:07.582 "params": { 00:17:07.582 "enable_ktls": false, 00:17:07.582 "enable_placement_id": 0, 00:17:07.582 "enable_quickack": false, 00:17:07.582 "enable_recv_pipe": true, 00:17:07.582 "enable_zerocopy_send_client": false, 00:17:07.582 "enable_zerocopy_send_server": true, 00:17:07.582 "impl_name": "posix", 00:17:07.582 "recv_buf_size": 2097152, 00:17:07.582 "send_buf_size": 2097152, 00:17:07.582 "tls_version": 0, 00:17:07.582 "zerocopy_threshold": 0 00:17:07.582 } 00:17:07.582 } 
00:17:07.582 ] 00:17:07.582 }, 00:17:07.582 { 00:17:07.582 "subsystem": "vmd", 00:17:07.582 "config": [] 00:17:07.582 }, 00:17:07.582 { 00:17:07.582 "subsystem": "accel", 00:17:07.582 "config": [ 00:17:07.582 { 00:17:07.582 "method": "accel_set_options", 00:17:07.582 "params": { 00:17:07.582 "buf_count": 2048, 00:17:07.582 "large_cache_size": 16, 00:17:07.582 "sequence_count": 2048, 00:17:07.582 "small_cache_size": 128, 00:17:07.582 "task_count": 2048 00:17:07.582 } 00:17:07.582 } 00:17:07.582 ] 00:17:07.582 }, 00:17:07.582 { 00:17:07.582 "subsystem": "bdev", 00:17:07.582 "config": [ 00:17:07.582 { 00:17:07.582 "method": "bdev_set_options", 00:17:07.582 "params": { 00:17:07.582 "bdev_auto_examine": true, 00:17:07.582 "bdev_io_cache_size": 256, 00:17:07.582 "bdev_io_pool_size": 65535, 00:17:07.582 "iobuf_large_cache_size": 16, 00:17:07.582 "iobuf_small_cache_size": 128 00:17:07.582 } 00:17:07.582 }, 00:17:07.582 { 00:17:07.582 "method": "bdev_raid_set_options", 00:17:07.582 "params": { 00:17:07.582 "process_max_bandwidth_mb_sec": 0, 00:17:07.582 "process_window_size_kb": 1024 00:17:07.582 } 00:17:07.582 }, 00:17:07.582 { 00:17:07.582 "method": "bdev_iscsi_set_options", 00:17:07.582 "params": { 00:17:07.582 "timeout_sec": 30 00:17:07.582 } 00:17:07.582 }, 00:17:07.582 { 00:17:07.582 "method": "bdev_nvme_set_options", 00:17:07.582 "params": { 00:17:07.582 "action_on_timeout": "none", 00:17:07.582 "allow_accel_sequence": false, 00:17:07.582 "arbitration_burst": 0, 00:17:07.582 "bdev_retry_count": 3, 00:17:07.582 "ctrlr_loss_timeout_sec": 0, 00:17:07.582 "delay_cmd_submit": true, 00:17:07.582 "dhchap_dhgroups": [ 00:17:07.582 "null", 00:17:07.582 "ffdhe2048", 00:17:07.583 "ffdhe3072", 00:17:07.583 "ffdhe4096", 00:17:07.583 "ffdhe6144", 00:17:07.583 "ffdhe8192" 00:17:07.583 ], 00:17:07.583 "dhchap_digests": [ 00:17:07.583 "sha256", 00:17:07.583 "sha384", 00:17:07.583 "sha512" 00:17:07.583 ], 00:17:07.583 "disable_auto_failback": false, 00:17:07.583 "fast_io_fail_timeout_sec": 0, 00:17:07.583 "generate_uuids": false, 00:17:07.583 "high_priority_weight": 0, 00:17:07.583 "io_path_stat": false, 00:17:07.583 "io_queue_requests": 0, 00:17:07.583 "keep_alive_timeout_ms": 10000, 00:17:07.583 "low_priority_weight": 0, 00:17:07.583 "medium_priority_weight": 0, 00:17:07.583 "nvme_adminq_poll_period_us": 10000, 00:17:07.583 "nvme_error_stat": false, 00:17:07.583 "nvme_ioq_poll_period_us": 0, 00:17:07.583 "rdma_cm_event_timeout_ms": 0, 00:17:07.583 "rdma_max_cq_size": 0, 00:17:07.583 "rdma_srq_size": 0, 00:17:07.583 "rdma_umr_per_io": false, 00:17:07.583 "reconnect_delay_sec": 0, 00:17:07.583 "timeout_admin_us": 0, 00:17:07.583 "timeout_us": 0, 00:17:07.583 "transport_ack_timeout": 0, 00:17:07.583 "transport_retry_count": 4, 00:17:07.583 "transport_tos": 0 00:17:07.583 } 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "method": "bdev_nvme_set_hotplug", 00:17:07.583 "params": { 00:17:07.583 "enable": false, 00:17:07.583 "period_us": 100000 00:17:07.583 } 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "method": "bdev_malloc_create", 00:17:07.583 "params": { 00:17:07.583 "block_size": 4096, 00:17:07.583 "dif_is_head_of_md": false, 00:17:07.583 "dif_pi_format": 0, 00:17:07.583 "dif_type": 0, 00:17:07.583 "md_size": 0, 00:17:07.583 "name": "malloc0", 00:17:07.583 "num_blocks": 8192, 00:17:07.583 "optimal_io_boundary": 0, 00:17:07.583 "physical_block_size": 4096, 00:17:07.583 "uuid": "83e5ca0b-c771-4dba-8a63-852a9a7267d5" 00:17:07.583 } 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "method": "bdev_wait_for_examine" 
00:17:07.583 } 00:17:07.583 ] 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "subsystem": "nbd", 00:17:07.583 "config": [] 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "subsystem": "scheduler", 00:17:07.583 "config": [ 00:17:07.583 { 00:17:07.583 "method": "framework_set_scheduler", 00:17:07.583 "params": { 00:17:07.583 "name": "static" 00:17:07.583 } 00:17:07.583 } 00:17:07.583 ] 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "subsystem": "nvmf", 00:17:07.583 "config": [ 00:17:07.583 { 00:17:07.583 "method": "nvmf_set_config", 00:17:07.583 "params": { 00:17:07.583 "admin_cmd_passthru": { 00:17:07.583 "identify_ctrlr": false 00:17:07.583 }, 00:17:07.583 "dhchap_dhgroups": [ 00:17:07.583 "null", 00:17:07.583 "ffdhe2048", 00:17:07.583 "ffdhe3072", 00:17:07.583 "ffdhe4096", 00:17:07.583 "ffdhe6144", 00:17:07.583 "ffdhe8192" 00:17:07.583 ], 00:17:07.583 "dhchap_digests": [ 00:17:07.583 "sha256", 00:17:07.583 "sha384", 00:17:07.583 "sha512" 00:17:07.583 ], 00:17:07.583 "discovery_filter": "match_any" 00:17:07.583 } 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "method": "nvmf_set_max_subsystems", 00:17:07.583 "params": { 00:17:07.583 "max_subsystems": 1024 00:17:07.583 } 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "method": "nvmf_set_crdt", 00:17:07.583 "params": { 00:17:07.583 "crdt1": 0, 00:17:07.583 "crdt2": 0, 00:17:07.583 "crdt3": 0 00:17:07.583 } 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "method": "nvmf_create_transport", 00:17:07.583 "params": { 00:17:07.583 "abort_timeout_sec": 1, 00:17:07.583 "ack_timeout": 0, 00:17:07.583 "buf_cache_size": 4294967295, 00:17:07.583 "c2h_success": false, 00:17:07.583 "data_wr_pool_size": 0, 00:17:07.583 "dif_insert_or_strip": false, 00:17:07.583 "in_capsule_data_size": 4096, 00:17:07.583 "io_unit_size": 131072, 00:17:07.583 "max_aq_depth": 128, 00:17:07.583 "max_io_qpairs_per_ctrlr": 127, 00:17:07.583 "max_io_size": 131072, 00:17:07.583 "max_queue_depth": 128, 00:17:07.583 "num_shared_buffers": 511, 00:17:07.583 "sock_priority": 0, 00:17:07.583 "trtype": "TCP", 00:17:07.583 "zcopy": false 00:17:07.583 } 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "method": "nvmf_create_subsystem", 00:17:07.583 "params": { 00:17:07.583 "allow_any_host": false, 00:17:07.583 "ana_reporting": false, 00:17:07.583 "max_cntlid": 65519, 00:17:07.583 "max_namespaces": 32, 00:17:07.583 "min_cntlid": 1, 00:17:07.583 "model_number": "SPDK bdev Controller", 00:17:07.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.583 "serial_number": "00000000000000000000" 00:17:07.583 } 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "method": "nvmf_subsystem_add_host", 00:17:07.583 "params": { 00:17:07.583 "host": "nqn.2016-06.io.spdk:host1", 00:17:07.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.583 "psk": "key0" 00:17:07.583 } 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "method": "nvmf_subsystem_add_ns", 00:17:07.583 "params": { 00:17:07.583 "namespace": { 00:17:07.583 "bdev_name": "malloc0", 00:17:07.583 "nguid": "83E5CA0BC7714DBA8A63852A9A7267D5", 00:17:07.583 "no_auto_visible": false, 00:17:07.583 "nsid": 1, 00:17:07.583 "uuid": "83e5ca0b-c771-4dba-8a63-852a9a7267d5" 00:17:07.583 }, 00:17:07.583 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:07.583 } 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "method": "nvmf_subsystem_add_listener", 00:17:07.583 "params": { 00:17:07.583 "listen_address": { 00:17:07.583 "adrfam": "IPv4", 00:17:07.583 "traddr": "10.0.0.3", 00:17:07.583 "trsvcid": "4420", 00:17:07.583 "trtype": "TCP" 00:17:07.583 }, 00:17:07.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.583 
"secure_channel": false, 00:17:07.583 "sock_impl": "ssl" 00:17:07.583 } 00:17:07.583 } 00:17:07.583 ] 00:17:07.583 } 00:17:07.583 ] 00:17:07.583 }' 00:17:07.583 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.583 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86291 00:17:07.583 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:07.583 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86291 00:17:07.583 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86291 ']' 00:17:07.583 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.583 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.583 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.583 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.583 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.841 [2024-12-16 11:39:08.337431] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:17:07.841 [2024-12-16 11:39:08.337529] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.841 [2024-12-16 11:39:08.482250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.841 [2024-12-16 11:39:08.536311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.841 [2024-12-16 11:39:08.536367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.841 [2024-12-16 11:39:08.536394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.841 [2024-12-16 11:39:08.536403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.841 [2024-12-16 11:39:08.536410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:07.841 [2024-12-16 11:39:08.536880] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.098 [2024-12-16 11:39:08.779319] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.098 [2024-12-16 11:39:08.811271] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:08.098 [2024-12-16 11:39:08.811516] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:08.663 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.663 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:08.663 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.663 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.663 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.921 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.921 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=86335 00:17:08.921 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 86335 /var/tmp/bdevperf.sock 00:17:08.921 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86335 ']' 00:17:08.921 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.921 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.921 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
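The listener notices above show the TLS side of the restored config taking effect: the PSK interchange file registered as key0 in the keyring subsystem, the host entry that pins it, and the ssl socket implementation on the 10.0.0.3:4420 listener. In rpc.py terms the wiring reduces to roughly the following; the keyring_file_add_key line is verbatim from this run, while the add_host --psk spelling is a sketch, since these flags have shifted between SPDK releases:

  # Register the PSK interchange file as keyring entry "key0".
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6UkUdY8wJO
  # Allow host1 onto cnode1 only when it presents that PSK.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0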
00:17:08.921 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.921 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.921 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:08.921 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:08.921 "subsystems": [ 00:17:08.921 { 00:17:08.922 "subsystem": "keyring", 00:17:08.922 "config": [ 00:17:08.922 { 00:17:08.922 "method": "keyring_file_add_key", 00:17:08.922 "params": { 00:17:08.922 "name": "key0", 00:17:08.922 "path": "/tmp/tmp.6UkUdY8wJO" 00:17:08.922 } 00:17:08.922 } 00:17:08.922 ] 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "subsystem": "iobuf", 00:17:08.922 "config": [ 00:17:08.922 { 00:17:08.922 "method": "iobuf_set_options", 00:17:08.922 "params": { 00:17:08.922 "enable_numa": false, 00:17:08.922 "large_bufsize": 135168, 00:17:08.922 "large_pool_count": 1024, 00:17:08.922 "small_bufsize": 8192, 00:17:08.922 "small_pool_count": 8192 00:17:08.922 } 00:17:08.922 } 00:17:08.922 ] 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "subsystem": "sock", 00:17:08.922 "config": [ 00:17:08.922 { 00:17:08.922 "method": "sock_set_default_impl", 00:17:08.922 "params": { 00:17:08.922 "impl_name": "posix" 00:17:08.922 } 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "method": "sock_impl_set_options", 00:17:08.922 "params": { 00:17:08.922 "enable_ktls": false, 00:17:08.922 "enable_placement_id": 0, 00:17:08.922 "enable_quickack": false, 00:17:08.922 "enable_recv_pipe": true, 00:17:08.922 "enable_zerocopy_send_client": false, 00:17:08.922 "enable_zerocopy_send_server": true, 00:17:08.922 "impl_name": "ssl", 00:17:08.922 "recv_buf_size": 4096, 00:17:08.922 "send_buf_size": 4096, 00:17:08.922 "tls_version": 0, 00:17:08.922 "zerocopy_threshold": 0 00:17:08.922 } 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "method": "sock_impl_set_options", 00:17:08.922 "params": { 00:17:08.922 "enable_ktls": false, 00:17:08.922 "enable_placement_id": 0, 00:17:08.922 "enable_quickack": false, 00:17:08.922 "enable_recv_pipe": true, 00:17:08.922 "enable_zerocopy_send_client": false, 00:17:08.922 "enable_zerocopy_send_server": true, 00:17:08.922 "impl_name": "posix", 00:17:08.922 "recv_buf_size": 2097152, 00:17:08.922 "send_buf_size": 2097152, 00:17:08.922 "tls_version": 0, 00:17:08.922 "zerocopy_threshold": 0 00:17:08.922 } 00:17:08.922 } 00:17:08.922 ] 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "subsystem": "vmd", 00:17:08.922 "config": [] 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "subsystem": "accel", 00:17:08.922 "config": [ 00:17:08.922 { 00:17:08.922 "method": "accel_set_options", 00:17:08.922 "params": { 00:17:08.922 "buf_count": 2048, 00:17:08.922 "large_cache_size": 16, 00:17:08.922 "sequence_count": 2048, 00:17:08.922 "small_cache_size": 128, 00:17:08.922 "task_count": 2048 00:17:08.922 } 00:17:08.922 } 00:17:08.922 ] 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "subsystem": "bdev", 00:17:08.922 "config": [ 00:17:08.922 { 00:17:08.922 "method": "bdev_set_options", 00:17:08.922 "params": { 00:17:08.922 "bdev_auto_examine": true, 00:17:08.922 "bdev_io_cache_size": 256, 00:17:08.922 "bdev_io_pool_size": 65535, 00:17:08.922 "iobuf_large_cache_size": 16, 00:17:08.922 "iobuf_small_cache_size": 128 00:17:08.922 } 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "method": "bdev_raid_set_options", 
00:17:08.922 "params": { 00:17:08.922 "process_max_bandwidth_mb_sec": 0, 00:17:08.922 "process_window_size_kb": 1024 00:17:08.922 } 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "method": "bdev_iscsi_set_options", 00:17:08.922 "params": { 00:17:08.922 "timeout_sec": 30 00:17:08.922 } 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "method": "bdev_nvme_set_options", 00:17:08.922 "params": { 00:17:08.922 "action_on_timeout": "none", 00:17:08.922 "allow_accel_sequence": false, 00:17:08.922 "arbitration_burst": 0, 00:17:08.922 "bdev_retry_count": 3, 00:17:08.922 "ctrlr_loss_timeout_sec": 0, 00:17:08.922 "delay_cmd_submit": true, 00:17:08.922 "dhchap_dhgroups": [ 00:17:08.922 "null", 00:17:08.922 "ffdhe2048", 00:17:08.922 "ffdhe3072", 00:17:08.922 "ffdhe4096", 00:17:08.922 "ffdhe6144", 00:17:08.922 "ffdhe8192" 00:17:08.922 ], 00:17:08.922 "dhchap_digests": [ 00:17:08.922 "sha256", 00:17:08.922 "sha384", 00:17:08.922 "sha512" 00:17:08.922 ], 00:17:08.922 "disable_auto_failback": false, 00:17:08.922 "fast_io_fail_timeout_sec": 0, 00:17:08.922 "generate_uuids": false, 00:17:08.922 "high_priority_weight": 0, 00:17:08.922 "io_path_stat": false, 00:17:08.922 "io_queue_requests": 512, 00:17:08.922 "keep_alive_timeout_ms": 10000, 00:17:08.922 "low_priority_weight": 0, 00:17:08.922 "medium_priority_weight": 0, 00:17:08.922 "nvme_adminq_poll_period_us": 10000, 00:17:08.922 "nvme_error_stat": false, 00:17:08.922 "nvme_ioq_poll_period_us": 0, 00:17:08.922 "rdma_cm_event_timeout_ms": 0, 00:17:08.922 "rdma_max_cq_size": 0, 00:17:08.922 "rdma_srq_size": 0, 00:17:08.922 "rdma_umr_per_io": false, 00:17:08.922 "reconnect_delay_sec": 0, 00:17:08.922 "timeout_admin_us": 0, 00:17:08.922 "timeout_us": 0, 00:17:08.922 "transport_ack_timeout": 0, 00:17:08.922 "transport_retry_count": 4, 00:17:08.922 "transport_tos": 0 00:17:08.922 } 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "method": "bdev_nvme_attach_controller", 00:17:08.922 "params": { 00:17:08.922 "adrfam": "IPv4", 00:17:08.922 "ctrlr_loss_timeout_sec": 0, 00:17:08.922 "ddgst": false, 00:17:08.922 "fast_io_fail_timeout_sec": 0, 00:17:08.922 "hdgst": false, 00:17:08.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:08.922 "multipath": "multipath", 00:17:08.922 "name": "nvme0", 00:17:08.922 "prchk_guard": false, 00:17:08.922 "prchk_reftag": false, 00:17:08.922 "psk": "key0", 00:17:08.922 "reconnect_delay_sec": 0, 00:17:08.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.922 "traddr": "10.0.0.3", 00:17:08.922 "trsvcid": "4420", 00:17:08.922 "trtype": "TCP" 00:17:08.922 } 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "method": "bdev_nvme_set_hotplug", 00:17:08.922 "params": { 00:17:08.922 "enable": false, 00:17:08.922 "period_us": 100000 00:17:08.922 } 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "method": "bdev_enable_histogram", 00:17:08.922 "params": { 00:17:08.922 "enable": true, 00:17:08.922 "name": "nvme0n1" 00:17:08.922 } 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "method": "bdev_wait_for_examine" 00:17:08.922 } 00:17:08.922 ] 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "subsystem": "nbd", 00:17:08.922 "config": [] 00:17:08.922 } 00:17:08.922 ] 00:17:08.922 }' 00:17:08.922 [2024-12-16 11:39:09.482192] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:17:08.922 [2024-12-16 11:39:09.482289] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86335 ] 00:17:08.922 [2024-12-16 11:39:09.633954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.180 [2024-12-16 11:39:09.698497] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.180 [2024-12-16 11:39:09.878371] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:10.126 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.126 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:10.126 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:10.126 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:10.126 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.126 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:10.384 Running I/O for 1 seconds... 00:17:11.318 4169.00 IOPS, 16.29 MiB/s 00:17:11.318 Latency(us) 00:17:11.318 [2024-12-16T11:39:12.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.318 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:11.318 Verification LBA range: start 0x0 length 0x2000 00:17:11.318 nvme0n1 : 1.02 4194.29 16.38 0.00 0.00 30098.35 7566.43 20494.89 00:17:11.318 [2024-12-16T11:39:12.071Z] =================================================================================================================== 00:17:11.318 [2024-12-16T11:39:12.071Z] Total : 4194.29 16.38 0.00 0.00 30098.35 7566.43 20494.89 00:17:11.318 { 00:17:11.318 "results": [ 00:17:11.318 { 00:17:11.318 "job": "nvme0n1", 00:17:11.318 "core_mask": "0x2", 00:17:11.318 "workload": "verify", 00:17:11.318 "status": "finished", 00:17:11.318 "verify_range": { 00:17:11.318 "start": 0, 00:17:11.318 "length": 8192 00:17:11.318 }, 00:17:11.318 "queue_depth": 128, 00:17:11.318 "io_size": 4096, 00:17:11.318 "runtime": 1.024488, 00:17:11.318 "iops": 4194.290221066523, 00:17:11.318 "mibps": 16.383946176041107, 00:17:11.318 "io_failed": 0, 00:17:11.318 "io_timeout": 0, 00:17:11.318 "avg_latency_us": 30098.354637273358, 00:17:11.318 "min_latency_us": 7566.4290909090905, 00:17:11.318 "max_latency_us": 20494.894545454546 00:17:11.318 } 00:17:11.318 ], 00:17:11.318 "core_count": 1 00:17:11.318 } 00:17:11.318 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:17:11.318 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:11.318 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:11.318 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:17:11.318 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:17:11.318 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:11.318 
11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:11.318 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:11.318 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:11.318 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:11.318 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:11.318 nvmf_trace.0 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 86335 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86335 ']' 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86335 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86335 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:11.576 killing process with pid 86335 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86335' 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86335 00:17:11.576 Received shutdown signal, test time was about 1.000000 seconds 00:17:11.576 00:17:11.576 Latency(us) 00:17:11.576 [2024-12-16T11:39:12.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.576 [2024-12-16T11:39:12.329Z] =================================================================================================================== 00:17:11.576 [2024-12-16T11:39:12.329Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86335 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:11.576 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:11.834 rmmod nvme_tcp 00:17:11.834 rmmod nvme_fabrics 00:17:11.834 rmmod nvme_keyring 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 86291 ']' 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 86291 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86291 ']' 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86291 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86291 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.834 killing process with pid 86291 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86291' 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86291 00:17:11.834 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86291 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:12.092 11:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:12.092 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xv8GE3ck7Q /tmp/tmp.lrZP6vEiZy /tmp/tmp.6UkUdY8wJO 00:17:12.350 ************************************ 00:17:12.350 END TEST nvmf_tls 00:17:12.350 ************************************ 00:17:12.350 00:17:12.350 real 1m26.792s 00:17:12.350 user 2m20.615s 00:17:12.350 sys 0m27.924s 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.350 ************************************ 00:17:12.350 START TEST nvmf_fips 00:17:12.350 ************************************ 00:17:12.350 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:12.350 * Looking for test storage... 
00:17:12.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:12.350 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:12.350 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:17:12.350 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:12.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.609 --rc genhtml_branch_coverage=1 00:17:12.609 --rc genhtml_function_coverage=1 00:17:12.609 --rc genhtml_legend=1 00:17:12.609 --rc geninfo_all_blocks=1 00:17:12.609 --rc geninfo_unexecuted_blocks=1 00:17:12.609 00:17:12.609 ' 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:12.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.609 --rc genhtml_branch_coverage=1 00:17:12.609 --rc genhtml_function_coverage=1 00:17:12.609 --rc genhtml_legend=1 00:17:12.609 --rc geninfo_all_blocks=1 00:17:12.609 --rc geninfo_unexecuted_blocks=1 00:17:12.609 00:17:12.609 ' 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:12.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.609 --rc genhtml_branch_coverage=1 00:17:12.609 --rc genhtml_function_coverage=1 00:17:12.609 --rc genhtml_legend=1 00:17:12.609 --rc geninfo_all_blocks=1 00:17:12.609 --rc geninfo_unexecuted_blocks=1 00:17:12.609 00:17:12.609 ' 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:12.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.609 --rc genhtml_branch_coverage=1 00:17:12.609 --rc genhtml_function_coverage=1 00:17:12.609 --rc genhtml_legend=1 00:17:12.609 --rc geninfo_all_blocks=1 00:17:12.609 --rc geninfo_unexecuted_blocks=1 00:17:12.609 00:17:12.609 ' 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
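The xtrace above walks scripts/common.sh's dotted-version comparison: lt 1.15 2 splits both strings into fields and compares them numerically, concluding that the installed lcov predates 2.x so the older coverage flags get exported. A condensed, self-contained sketch of that check, assuming plain dot-separated versions (the real cmp_versions also splits on '-' and ':' and handles the other operators):

  lt() {
      local IFS=. i
      local -a a=($1) b=($2)
      # Compare field by field; missing fields count as 0.
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1   # equal counts as not-less-than
  }
  lt 1.15 2 && echo "lcov is older than 2.x"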
00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.609 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.610 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:17:12.610 Error setting digest 00:17:12.610 406267F2E67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:12.610 406267F2E67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:12.610 
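As the "Error setting digest" output above shows, fips.sh verifies FIPS enforcement with a negative test: under the FIPS provider, fetching a non-approved digest such as MD5 must fail. A condensed sketch of that check, assuming an OpenSSL 3.x build with OPENSSL_CONF pointing at a FIPS-enabled config like the spdk_fips.conf generated above:

    # In FIPS mode MD5 is not an approved algorithm, so this must fail.
    if echo -n test | openssl md5 >/dev/null 2>&1; then
        echo 'openssl md5 succeeded: FIPS mode is NOT enforced' >&2
        exit 1
    fi
    echo 'openssl md5 rejected as expected: FIPS provider is active'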
11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:12.610 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:12.611 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:12.611 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:12.611 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:12.611 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:12.611 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.611 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:12.611 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:12.611 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:12.611 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:12.611 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:12.869 Cannot find device "nvmf_init_br" 00:17:12.869 11:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:12.869 Cannot find device "nvmf_init_br2" 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:12.869 Cannot find device "nvmf_tgt_br" 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:12.869 Cannot find device "nvmf_tgt_br2" 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:12.869 Cannot find device "nvmf_init_br" 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:12.869 Cannot find device "nvmf_init_br2" 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:12.869 Cannot find device "nvmf_tgt_br" 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:12.869 Cannot find device "nvmf_tgt_br2" 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:12.869 Cannot find device "nvmf_br" 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:12.869 Cannot find device "nvmf_init_if" 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:12.869 Cannot find device "nvmf_init_if2" 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:12.869 11:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:12.869 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:13.127 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:13.127 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:13.127 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:13.128 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:13.128 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:17:13.128 00:17:13.128 --- 10.0.0.3 ping statistics --- 00:17:13.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.128 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:13.128 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:13.128 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:17:13.128 00:17:13.128 --- 10.0.0.4 ping statistics --- 00:17:13.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.128 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:13.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:13.128 00:17:13.128 --- 10.0.0.1 ping statistics --- 00:17:13.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.128 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:13.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:13.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:13.128 00:17:13.128 --- 10.0.0.2 ping statistics --- 00:17:13.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.128 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=86674 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 86674 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 86674 ']' 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.128 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:13.128 [2024-12-16 11:39:13.844313] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
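The four successful pings above confirm the veth test bed that nvmf_veth_init just built. Condensed, and using the same interface names and addresses as the trace (iproute2 assumed; the *_if2/*_br2 pair and the per-device "ip link set ... up" calls are elided here but appear verbatim above):

    # Initiator veths stay on the host; target veths move into a namespace;
    # everything is joined by the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br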
00:17:13.128 [2024-12-16 11:39:13.844413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.385 [2024-12-16 11:39:13.999494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.385 [2024-12-16 11:39:14.057352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.385 [2024-12-16 11:39:14.057405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.385 [2024-12-16 11:39:14.057419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.385 [2024-12-16 11:39:14.057429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.385 [2024-12-16 11:39:14.057439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.385 [2024-12-16 11:39:14.057870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.rna 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.rna 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.rna 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.rna 00:17:14.318 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.576 [2024-12-16 11:39:15.210592] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.576 [2024-12-16 11:39:15.226528] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:14.576 [2024-12-16 11:39:15.226716] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:14.576 malloc0 00:17:14.576 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:14.576 11:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=86739 00:17:14.576 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:14.576 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 86739 /var/tmp/bdevperf.sock 00:17:14.576 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 86739 ']' 00:17:14.576 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.576 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:14.576 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.576 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:14.576 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:14.847 [2024-12-16 11:39:15.378924] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:17:14.847 [2024-12-16 11:39:15.379034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86739 ] 00:17:14.847 [2024-12-16 11:39:15.520573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.847 [2024-12-16 11:39:15.586785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.797 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.797 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:15.797 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.rna 00:17:16.055 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:16.314 [2024-12-16 11:39:16.896589] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:16.314 TLSTESTn1 00:17:16.314 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:16.572 Running I/O for 10 seconds... 
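Before the I/O loop starts, the test registers the interleaved TLS PSK with bdevperf's RPC server and attaches the target over TCP using that key, exactly as traced above. In condensed form, with the socket path, key file, and NQNs taken from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.rna
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The resulting TLSTESTn1 bdev is what bdevperf.py then exercises for 10 seconds below.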
00:17:18.444 3968.00 IOPS, 15.50 MiB/s [2024-12-16T11:39:20.133Z] 3997.00 IOPS, 15.61 MiB/s [2024-12-16T11:39:21.510Z] 4028.33 IOPS, 15.74 MiB/s [2024-12-16T11:39:22.449Z] 4045.75 IOPS, 15.80 MiB/s [2024-12-16T11:39:23.385Z] 4064.80 IOPS, 15.88 MiB/s [2024-12-16T11:39:24.320Z] 4062.17 IOPS, 15.87 MiB/s [2024-12-16T11:39:25.256Z] 4062.00 IOPS, 15.87 MiB/s [2024-12-16T11:39:26.189Z] 4065.75 IOPS, 15.88 MiB/s [2024-12-16T11:39:27.564Z] 4061.89 IOPS, 15.87 MiB/s [2024-12-16T11:39:27.564Z] 4047.50 IOPS, 15.81 MiB/s 00:17:26.811 Latency(us) 00:17:26.811 [2024-12-16T11:39:27.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.811 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:26.811 Verification LBA range: start 0x0 length 0x2000 00:17:26.811 TLSTESTn1 : 10.02 4052.90 15.83 0.00 0.00 31519.75 6702.55 24307.90 00:17:26.812 [2024-12-16T11:39:27.565Z] =================================================================================================================== 00:17:26.812 [2024-12-16T11:39:27.565Z] Total : 4052.90 15.83 0.00 0.00 31519.75 6702.55 24307.90 00:17:26.812 { 00:17:26.812 "results": [ 00:17:26.812 { 00:17:26.812 "job": "TLSTESTn1", 00:17:26.812 "core_mask": "0x4", 00:17:26.812 "workload": "verify", 00:17:26.812 "status": "finished", 00:17:26.812 "verify_range": { 00:17:26.812 "start": 0, 00:17:26.812 "length": 8192 00:17:26.812 }, 00:17:26.812 "queue_depth": 128, 00:17:26.812 "io_size": 4096, 00:17:26.812 "runtime": 10.018254, 00:17:26.812 "iops": 4052.9018329940527, 00:17:26.812 "mibps": 15.831647785133018, 00:17:26.812 "io_failed": 0, 00:17:26.812 "io_timeout": 0, 00:17:26.812 "avg_latency_us": 31519.7539664109, 00:17:26.812 "min_latency_us": 6702.545454545455, 00:17:26.812 "max_latency_us": 24307.898181818182 00:17:26.812 } 00:17:26.812 ], 00:17:26.812 "core_count": 1 00:17:26.812 } 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:26.812 nvmf_trace.0 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 86739 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 86739 ']' 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
86739 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86739 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:26.812 killing process with pid 86739 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86739' 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 86739 00:17:26.812 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.812 00:17:26.812 Latency(us) 00:17:26.812 [2024-12-16T11:39:27.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.812 [2024-12-16T11:39:27.565Z] =================================================================================================================== 00:17:26.812 [2024-12-16T11:39:27.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 86739 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:26.812 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:26.812 rmmod nvme_tcp 00:17:27.071 rmmod nvme_fabrics 00:17:27.071 rmmod nvme_keyring 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 86674 ']' 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 86674 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 86674 ']' 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 86674 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86674 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:17:27.071 killing process with pid 86674 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86674' 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 86674 00:17:27.071 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 86674 00:17:27.332 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:27.332 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:27.332 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:27.333 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:27.333 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:27.333 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:27.333 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.333 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:17:27.591 11:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.rna 00:17:27.591 00:17:27.591 real 0m15.165s 00:17:27.591 user 0m21.242s 00:17:27.591 sys 0m5.874s 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:27.591 ************************************ 00:17:27.591 END TEST nvmf_fips 00:17:27.591 ************************************ 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.591 ************************************ 00:17:27.591 START TEST nvmf_control_msg_list 00:17:27.591 ************************************ 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:27.591 * Looking for test storage... 00:17:27.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:17:27.591 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:27.860 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:27.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.861 --rc genhtml_branch_coverage=1 00:17:27.861 --rc genhtml_function_coverage=1 00:17:27.861 --rc genhtml_legend=1 00:17:27.861 --rc geninfo_all_blocks=1 00:17:27.861 --rc geninfo_unexecuted_blocks=1 00:17:27.861 00:17:27.861 ' 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:27.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.861 --rc genhtml_branch_coverage=1 00:17:27.861 --rc genhtml_function_coverage=1 00:17:27.861 --rc genhtml_legend=1 00:17:27.861 --rc geninfo_all_blocks=1 00:17:27.861 --rc geninfo_unexecuted_blocks=1 00:17:27.861 00:17:27.861 ' 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:27.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.861 --rc genhtml_branch_coverage=1 00:17:27.861 --rc genhtml_function_coverage=1 00:17:27.861 --rc genhtml_legend=1 00:17:27.861 --rc geninfo_all_blocks=1 00:17:27.861 --rc geninfo_unexecuted_blocks=1 00:17:27.861 00:17:27.861 ' 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:27.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.861 --rc genhtml_branch_coverage=1 00:17:27.861 --rc genhtml_function_coverage=1 00:17:27.861 --rc genhtml_legend=1 00:17:27.861 --rc geninfo_all_blocks=1 00:17:27.861 --rc 
geninfo_unexecuted_blocks=1 00:17:27.861 00:17:27.861 ' 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
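Aside: paths/export.sh re-prepends /opt/{golangci,protoc,go} on every nested source, which is why the exported PATH above carries the same three directories six times over. The script never dedupes; a sketch of one order-preserving cleanup, if one wanted it (`dedupe_path` is hypothetical, not part of the repo):

```bash
# Hypothetical order-preserving PATH dedupe; not something paths/export.sh does.
dedupe_path() {
    local IFS=: seen=: out= p
    for p in $PATH; do                      # IFS=: splits on colons
        [[ $seen == *":$p:"* ]] && continue # already emitted this entry
        seen+="$p:"
        out+="${out:+:}$p"
    done
    printf '%s\n' "$out"
}
PATH=$(dedupe_path)
```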
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.861 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
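Aside: the `[: : integer expression expected` complaint above is nvmf/common.sh line 33 running `[ '' -eq 1 ]` against an empty variable; the test simply falls through rather than crashing. The standard guards look like this (`VAR` is a placeholder name):

```bash
# Guarding an -eq test against an empty/unset variable (VAR is a placeholder).
VAR=""
[ "${VAR:-0}" -eq 1 ] && echo "VAR is 1"   # default the empty value to 0
[[ ${VAR:-0} -eq 1 ]] && echo "VAR is 1"   # [[ ]] form, same defaulting
```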
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:27.861 Cannot find device "nvmf_init_br" 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:27.861 Cannot find device "nvmf_init_br2" 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:27.861 Cannot find device "nvmf_tgt_br" 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.861 Cannot find device "nvmf_tgt_br2" 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:27.861 Cannot find device "nvmf_init_br" 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:27.861 Cannot find device "nvmf_init_br2" 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:27.861 Cannot find device "nvmf_tgt_br" 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:27.861 Cannot find device "nvmf_tgt_br2" 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:27.861 Cannot find device "nvmf_br" 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:27.861 Cannot find 
device "nvmf_init_if" 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:27.861 Cannot find device "nvmf_init_if2" 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:27.861 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:27.862 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:27.862 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:27.862 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:28.120 11:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:28.120 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:28.120 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:17:28.120 00:17:28.120 --- 10.0.0.3 ping statistics --- 00:17:28.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.120 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:28.120 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:28.120 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:17:28.120 00:17:28.120 --- 10.0.0.4 ping statistics --- 00:17:28.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.120 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:28.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
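Aside: before starting the target, the trace punches TCP/4420 through INPUT with comment-tagged rules (the `SPDK_NVMF:` tag is what lets teardown strip them later) and then ping-checks all four addresses in both directions. Condensed:

```bash
# Tagged firewall openings plus the four-way reachability check from the trace.
# The 'SPDK_NVMF:' comment is the teardown key: iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3                                  # root ns  -> target veth
ping -c 1 10.0.0.4                                  # root ns  -> second target veth
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator veth
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2   # target ns -> second initiator veth
```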
00:17:28.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:28.120 00:17:28.120 --- 10.0.0.1 ping statistics --- 00:17:28.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.120 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:28.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:17:28.120 00:17:28.120 --- 10.0.0.2 ping statistics --- 00:17:28.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.120 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=87160 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 87160 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:28.120 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 87160 ']' 00:17:28.121 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.121 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.121 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
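Aside: nvmfappstart above backgrounds nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A simplified sketch of that start-and-poll pattern; the loop details are illustrative, not the exact autotest_common.sh logic:

```bash
# Simplified start-and-poll; the real waitforlisten in autotest_common.sh
# carries more retries and bookkeeping, but this is the shape of it.
spdk=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for ((i = 0; i < 100; i++)); do
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done
```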
00:17:28.121 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.121 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:28.121 [2024-12-16 11:39:28.839804] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:17:28.121 [2024-12-16 11:39:28.839908] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.379 [2024-12-16 11:39:28.990338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.379 [2024-12-16 11:39:29.061222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.379 [2024-12-16 11:39:29.061313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.379 [2024-12-16 11:39:29.061327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.379 [2024-12-16 11:39:29.061338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.379 [2024-12-16 11:39:29.061348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.379 [2024-12-16 11:39:29.061844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:28.638 [2024-12-16 11:39:29.261692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:28.638 Malloc0 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:28.638 [2024-12-16 11:39:29.301908] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=87195 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=87196 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=87197 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 87195 00:17:28.638 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:28.896 [2024-12-16 11:39:29.490268] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
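Aside: the rpc_cmd calls above are thin wrappers over scripts/rpc.py. The whole target-side setup, condensed, with the one deliberately pathological knob called out: `--control-msg-num 1` leaves a single control message buffer, so the three concurrent initiators below must queue on the list this test exercises:

```bash
# Target-side setup driven over /var/tmp/spdk.sock (rpc_cmd == scripts/rpc.py here).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
$rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a            # -a: allow any host
$rpc bdev_malloc_create -b Malloc0 32 512                           # 32 MiB bdev, 512 B blocks
$rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
```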
subsystem. This behavior is deprecated and will be removed in a future release. 00:17:28.897 [2024-12-16 11:39:29.500706] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:28.897 [2024-12-16 11:39:29.501157] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:29.833 Initializing NVMe Controllers 00:17:29.833 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:29.833 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:29.833 Initialization complete. Launching workers. 00:17:29.833 ======================================================== 00:17:29.833 Latency(us) 00:17:29.833 Device Information : IOPS MiB/s Average min max 00:17:29.833 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3315.00 12.95 301.36 130.26 757.16 00:17:29.833 ======================================================== 00:17:29.833 Total : 3315.00 12.95 301.36 130.26 757.16 00:17:29.833 00:17:29.833 Initializing NVMe Controllers 00:17:29.833 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:29.833 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:29.833 Initialization complete. Launching workers. 00:17:29.833 ======================================================== 00:17:29.833 Latency(us) 00:17:29.833 Device Information : IOPS MiB/s Average min max 00:17:29.833 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3291.95 12.86 303.44 157.41 717.97 00:17:29.833 ======================================================== 00:17:29.833 Total : 3291.95 12.86 303.44 157.41 717.97 00:17:29.833 00:17:29.833 Initializing NVMe Controllers 00:17:29.833 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:29.833 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:29.833 Initialization complete. Launching workers. 
00:17:29.833 ======================================================== 00:17:29.833 Latency(us) 00:17:29.833 Device Information : IOPS MiB/s Average min max 00:17:29.833 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3292.00 12.86 303.44 213.00 589.91 00:17:29.833 ======================================================== 00:17:29.833 Total : 3292.00 12.86 303.44 213.00 589.91 00:17:29.833 00:17:29.833 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 87196 00:17:29.833 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 87197 00:17:29.833 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:29.833 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:29.833 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:29.833 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:30.092 rmmod nvme_tcp 00:17:30.092 rmmod nvme_fabrics 00:17:30.092 rmmod nvme_keyring 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 87160 ']' 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 87160 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 87160 ']' 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 87160 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87160 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.092 killing process with pid 87160 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87160' 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 87160 00:17:30.092 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
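Aside: the three result tables above come from three spdk_nvme_perf instances launched in parallel on separate core masks, all contending for the same one-buffer control message list; the near-identical ~3300 IOPS per client is the pass signal. The launch pattern, condensed:

```bash
# Three concurrent initiators on distinct cores, then wait on each PID
# (87195/87196/87197 in the trace above).
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
tr='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
"$perf" -c 0x2 -q 1 -o 4096 -w randread -t 1 -r "$tr" & pid1=$!
"$perf" -c 0x4 -q 1 -o 4096 -w randread -t 1 -r "$tr" & pid2=$!
"$perf" -c 0x8 -q 1 -o 4096 -w randread -t 1 -r "$tr" & pid3=$!
wait "$pid1" "$pid2" "$pid3"
```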
common/autotest_common.sh@978 -- # wait 87160 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:30.351 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:30.351 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:30.351 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:30.351 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:30.351 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:30.351 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:30.351 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:30.351 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:17:30.609 00:17:30.609 real 0m3.003s 00:17:30.609 user 0m4.833s 00:17:30.609 
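Aside: nvmftestfini above unwinds everything in reverse order: kill the target, strip only the SPDK-tagged firewall rules, then delete the bridge, the veths, and the namespace. Condensed:

```bash
# Condensed teardown mirroring nvmftestfini / nvmf_veth_fini in the trace.
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only our tagged rules
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" nomaster
    ip link set "$l" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # the _remove_spdk_ns step in the trace
```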
sys 0m1.433s 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:30.609 ************************************ 00:17:30.609 END TEST nvmf_control_msg_list 00:17:30.609 ************************************ 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:30.609 ************************************ 00:17:30.609 START TEST nvmf_wait_for_buf 00:17:30.609 ************************************ 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:30.609 * Looking for test storage... 00:17:30.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:30.609 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:17:30.610 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
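Aside: the END/START banners above come from autotest_common.sh's run_test wrapper, which times each test script (producing the real/user/sys trio just printed) and brackets it with the asterisk banners. A simplified sketch of the wrapper's shape, not its exact implementation:

```bash
# Simplified shape of run_test; the real one in autotest_common.sh also
# validates arguments and manages xtrace state.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test nvmf_wait_for_buf \
    /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
```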
ver1_l : ver2_l) )) 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:30.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.869 --rc genhtml_branch_coverage=1 00:17:30.869 --rc genhtml_function_coverage=1 00:17:30.869 --rc genhtml_legend=1 00:17:30.869 --rc geninfo_all_blocks=1 00:17:30.869 --rc geninfo_unexecuted_blocks=1 00:17:30.869 00:17:30.869 ' 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:30.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.869 --rc genhtml_branch_coverage=1 00:17:30.869 --rc genhtml_function_coverage=1 00:17:30.869 --rc genhtml_legend=1 00:17:30.869 --rc geninfo_all_blocks=1 00:17:30.869 --rc geninfo_unexecuted_blocks=1 00:17:30.869 00:17:30.869 ' 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:30.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.869 --rc genhtml_branch_coverage=1 00:17:30.869 --rc genhtml_function_coverage=1 00:17:30.869 --rc genhtml_legend=1 00:17:30.869 --rc geninfo_all_blocks=1 00:17:30.869 --rc geninfo_unexecuted_blocks=1 00:17:30.869 00:17:30.869 ' 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:30.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.869 --rc genhtml_branch_coverage=1 00:17:30.869 --rc genhtml_function_coverage=1 00:17:30.869 --rc genhtml_legend=1 00:17:30.869 --rc geninfo_all_blocks=1 00:17:30.869 --rc geninfo_unexecuted_blocks=1 00:17:30.869 00:17:30.869 ' 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:30.869 11:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.869 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:30.870 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
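Aside: build_nvmf_app_args above assembles the daemon invocation as a bash array, so the namespace prefix can later be spliced on without any re-quoting. The shape of it, with paths copied from the trace:

```bash
# App argv assembled as an array (nvmf/common.sh@29/@31, then @227 in the trace).
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
# once the namespace exists, prepend it and run the target inside:
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
"${NVMF_APP[@]}" &
```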
00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:30.870 Cannot find device "nvmf_init_br" 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:30.870 Cannot find device "nvmf_init_br2" 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:30.870 Cannot find device "nvmf_tgt_br" 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:30.870 Cannot find device "nvmf_tgt_br2" 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:30.870 Cannot find device "nvmf_init_br" 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:30.870 Cannot find device "nvmf_init_br2" 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:30.870 Cannot find device "nvmf_tgt_br" 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:30.870 Cannot find device "nvmf_tgt_br2" 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:30.870 Cannot find device "nvmf_br" 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:30.870 Cannot find device "nvmf_init_if" 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:30.870 Cannot find device "nvmf_init_if2" 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:30.870 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.870 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:17:30.871 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:30.871 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.871 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:17:30.871 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:31.130 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:31.130 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:17:31.130 00:17:31.130 --- 10.0.0.3 ping statistics --- 00:17:31.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.130 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:31.130 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:31.130 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:17:31.130 00:17:31.130 --- 10.0.0.4 ping statistics --- 00:17:31.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.130 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:31.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:31.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:31.130 00:17:31.130 --- 10.0.0.1 ping statistics --- 00:17:31.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.130 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:31.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:31.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:17:31.130 00:17:31.130 --- 10.0.0.2 ping statistics --- 00:17:31.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.130 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:31.130 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=87437 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 87437 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 87437 ']' 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.389 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:31.389 [2024-12-16 11:39:31.965300] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:17:31.389 [2024-12-16 11:39:31.965411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.389 [2024-12-16 11:39:32.120351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.648 [2024-12-16 11:39:32.201388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.648 [2024-12-16 11:39:32.201460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.648 [2024-12-16 11:39:32.201476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.648 [2024-12-16 11:39:32.201488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.648 [2024-12-16 11:39:32.201499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.648 [2024-12-16 11:39:32.201999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:31.648 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.648 11:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:31.907 Malloc0 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:31.907 [2024-12-16 11:39:32.487321] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:31.907 [2024-12-16 11:39:32.511509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.907 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:32.166 [2024-12-16 11:39:32.725336] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:17:33.543 Initializing NVMe Controllers 00:17:33.543 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:33.543 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:33.543 Initialization complete. Launching workers. 00:17:33.543 ======================================================== 00:17:33.543 Latency(us) 00:17:33.543 Device Information : IOPS MiB/s Average min max 00:17:33.543 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 131.00 16.37 31959.75 7943.69 68068.05 00:17:33.543 ======================================================== 00:17:33.543 Total : 131.00 16.37 31959.75 7943.69 68068.05 00:17:33.543 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2070 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2070 -eq 0 ]] 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.543 rmmod nvme_tcp 00:17:33.543 rmmod nvme_fabrics 00:17:33.543 rmmod nvme_keyring 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 87437 ']' 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 87437 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 87437 ']' 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 87437 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
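The retry_count=2070 read back through iobuf_get_stats above is the point of the wait_for_buf test: the target was started with a deliberately tiny iobuf small pool (iobuf_set_options --small-pool-count 154 with 8192-byte buffers, earlier in this trace), so the 128 KiB randread perf workload forces the nvmf_TCP module to wait for shared buffers, and the test would fail if the retry counter were still zero. The same stat can be pulled from any running SPDK app; a minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and jq on PATH:

# Pull the nvmf_TCP small-pool retry counter from a running SPDK app
# (nonzero means I/O had to wait for a shared buffer, which is exactly
# the condition wait_for_buf is trying to provoke).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock iobuf_get_stats |
    jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'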
00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.543 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87437 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.805 killing process with pid 87437 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87437' 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 87437 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 87437 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:33.805 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.064 11:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:17:34.064 00:17:34.064 real 0m3.513s 00:17:34.064 user 0m2.905s 00:17:34.064 sys 0m0.799s 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:34.064 ************************************ 00:17:34.064 END TEST nvmf_wait_for_buf 00:17:34.064 ************************************ 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.064 11:39:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:34.323 ************************************ 00:17:34.323 START TEST nvmf_nsid 00:17:34.323 ************************************ 00:17:34.323 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:34.323 * Looking for test storage... 
00:17:34.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:34.323 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:34.323 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:34.323 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:34.323 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:34.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.324 --rc genhtml_branch_coverage=1 00:17:34.324 --rc genhtml_function_coverage=1 00:17:34.324 --rc genhtml_legend=1 00:17:34.324 --rc geninfo_all_blocks=1 00:17:34.324 --rc geninfo_unexecuted_blocks=1 00:17:34.324 00:17:34.324 ' 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:34.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.324 --rc genhtml_branch_coverage=1 00:17:34.324 --rc genhtml_function_coverage=1 00:17:34.324 --rc genhtml_legend=1 00:17:34.324 --rc geninfo_all_blocks=1 00:17:34.324 --rc geninfo_unexecuted_blocks=1 00:17:34.324 00:17:34.324 ' 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:34.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.324 --rc genhtml_branch_coverage=1 00:17:34.324 --rc genhtml_function_coverage=1 00:17:34.324 --rc genhtml_legend=1 00:17:34.324 --rc geninfo_all_blocks=1 00:17:34.324 --rc geninfo_unexecuted_blocks=1 00:17:34.324 00:17:34.324 ' 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:34.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.324 --rc genhtml_branch_coverage=1 00:17:34.324 --rc genhtml_function_coverage=1 00:17:34.324 --rc genhtml_legend=1 00:17:34.324 --rc geninfo_all_blocks=1 00:17:34.324 --rc geninfo_unexecuted_blocks=1 00:17:34.324 00:17:34.324 ' 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
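The scripts/common.sh trace above is the lcov version gate: "lt 1.15 2" delegates to cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field. A standalone sketch of that comparison, simplified to assume purely numeric fields (the real helper also routes each field through its "decimal" normalizer):

# cmp_versions "1.15" '<' "2": split on .-: and compare field by field;
# fields missing on the shorter version compare as 0.
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # lhs newer -> not '<'
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # lhs older -> '<'
    done
    return 1  # equal versions are not strictly '<'
}
cmp_versions 1.15 '<' 2 && echo "1.15 is older than 2"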
00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.324 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:34.324 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:34.325 Cannot find device "nvmf_init_br" 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:34.325 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:34.581 Cannot find device "nvmf_init_br2" 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:34.581 Cannot find device "nvmf_tgt_br" 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.581 Cannot find device "nvmf_tgt_br2" 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:34.581 Cannot find device "nvmf_init_br" 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:34.581 Cannot find device "nvmf_init_br2" 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:34.581 Cannot find device "nvmf_tgt_br" 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:34.581 Cannot find device "nvmf_tgt_br2" 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:34.581 Cannot find device "nvmf_br" 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:34.581 Cannot find device "nvmf_init_if" 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:34.581 Cannot find device "nvmf_init_if2" 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:34.581 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:34.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:34.582 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
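From nvmf/common.sh@177 through @214 above, nvmf_veth_init rebuilds the virtual test network after the stale one was torn down: the namespace nvmf_tgt_ns_spdk holds the target ends (nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.3/10.0.0.4), the host keeps the initiator ends (nvmf_init_if/nvmf_init_if2 at 10.0.0.1/10.0.0.2), and the four *_br veth peers are enslaved to the nvmf_br bridge; the ipts lines that follow then open TCP port 4420 through the host firewall. A trimmed-down sketch of the same topology with one veth pair per side (root and iproute2 assumed; names and addresses copied from the trace):

# Namespace plus one veth pair for each side of the fabric.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
# Move the target end into the namespace and address both ends.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# Bring everything up and bridge the peer ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# The namespaced target (10.0.0.3) can now reach the initiator (10.0.0.1).
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1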
00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:34.840 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:34.840 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:17:34.840 00:17:34.840 --- 10.0.0.3 ping statistics --- 00:17:34.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.840 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:34.840 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:34.840 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:17:34.840 00:17:34.840 --- 10.0.0.4 ping statistics --- 00:17:34.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.840 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:34.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:34.840 00:17:34.840 --- 10.0.0.1 ping statistics --- 00:17:34.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.840 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:34.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:34.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:17:34.840 00:17:34.840 --- 10.0.0.2 ping statistics --- 00:17:34.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.840 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=87706 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 87706 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 87706 ']' 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.840 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:34.840 [2024-12-16 11:39:35.534606] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:17:34.841 [2024-12-16 11:39:35.534729] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.099 [2024-12-16 11:39:35.688252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.099 [2024-12-16 11:39:35.758077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.099 [2024-12-16 11:39:35.758178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.099 [2024-12-16 11:39:35.758193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.099 [2024-12-16 11:39:35.758202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.099 [2024-12-16 11:39:35.758210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.099 [2024-12-16 11:39:35.758661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=87756 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
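nvmfappstart above stored the target's pid (87706) and waitforlisten blocked until the app answered on /var/tmp/spdk.sock; the nsid test then derives tgt2addr=10.0.0.1 via get_main_ns_ip, i.e. the initiator-side address, because the second target runs on the host rather than inside the namespace. waitforlisten is essentially a bounded poll of the RPC socket; a sketch of the idea (the probe RPC and the 0.5 s interval are illustrative, max_retries=100 matches the trace):

# Bounded poll: retry a cheap RPC until the app answers on its socket,
# bailing out early if the process has already died.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # process gone
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
               rpc_get_methods &> /dev/null; then
            return 0                              # socket is up and serving
        fi
        sleep 0.5
    done
    return 1
}
# e.g. waitforlisten 87706 /var/tmp/spdk.sock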
00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=b9c7a9f8-50e7-4fd4-b5d9-269e982da6a3 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=120e3b07-986c-4640-acec-16c3fadbcbfa 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=6a628d1f-3b42-43e6-829c-011733b042f4 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:36.035 null0 00:17:36.035 null1 00:17:36.035 null2 00:17:36.035 [2024-12-16 11:39:36.637962] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.035 [2024-12-16 11:39:36.653254] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:17:36.035 [2024-12-16 11:39:36.653339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87756 ] 00:17:36.035 [2024-12-16 11:39:36.662084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 87756 /var/tmp/tgt2.sock 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 87756 ']' 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:36.035 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
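[Note] The rpc_cmd batch at nsid.sh@63 is collapsed in the trace, but from its outputs (null0/null1/null2) and the later listener on 10.0.0.1:4421 it plausibly amounts to something like the following; the exact RPC ordering, bdev sizes, and flag placement here are assumptions, only the names, NQN, and address come from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/tgt2.sock
    "$rpc" -s "$sock" nvmf_create_transport -t tcp
    "$rpc" -s "$sock" nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    for i in 1 2 3; do
        "$rpc" -s "$sock" bdev_null_create "null$((i - 1))" 64 512
        ref=ns${i}uuid   # ns1uuid..ns3uuid were generated above with uuidgen
        "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 "null$((i - 1))" -u "${!ref}"
    done
    "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421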
00:17:36.036 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.036 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:36.294 [2024-12-16 11:39:36.805760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.294 [2024-12-16 11:39:36.875234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.552 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.552 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:36.552 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:37.118 [2024-12-16 11:39:37.654333] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.118 [2024-12-16 11:39:37.670437] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:37.118 nvme0n1 nvme0n2 00:17:37.118 nvme1n1 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:37.118 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:37.377 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:37.377 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:37.377 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:38.312 11:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid b9c7a9f8-50e7-4fd4-b5d9-269e982da6a3 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b9c7a9f850e74fd4b5d9269e982da6a3 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B9C7A9F850E74FD4B5D9269E982DA6A3 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ B9C7A9F850E74FD4B5D9269E982DA6A3 == \B\9\C\7\A\9\F\8\5\0\E\7\4\F\D\4\B\5\D\9\2\6\9\E\9\8\2\D\A\6\A\3 ]] 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 120e3b07-986c-4640-acec-16c3fadbcbfa 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:38.312 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:38.312 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=120e3b07986c4640acec16c3fadbcbfa 00:17:38.312 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 120E3B07986C4640ACEC16C3FADBCBFA 00:17:38.312 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 120E3B07986C4640ACEC16C3FADBCBFA == \1\2\0\E\3\B\0\7\9\8\6\C\4\6\4\0\A\C\E\C\1\6\C\3\F\A\D\B\C\B\F\A ]] 00:17:38.312 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:38.312 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:38.312 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:38.312 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:38.312 11:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:38.312 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:38.312 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:38.313 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 6a628d1f-3b42-43e6-829c-011733b042f4 00:17:38.313 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:38.313 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:38.313 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:38.313 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:38.313 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6a628d1f3b4243e6829c011733b042f4 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6A628D1F3B4243E6829C011733B042F4 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 6A628D1F3B4243E6829C011733B042F4 == \6\A\6\2\8\D\1\F\3\B\4\2\4\3\E\6\8\2\9\C\0\1\1\7\3\3\B\0\4\2\F\4 ]] 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 87756 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 87756 ']' 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 87756 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87756 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:38.571 killing process with pid 87756 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87756' 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 87756 00:17:38.571 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 87756 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
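[Note] The three checks above verify that each namespace's NGUID, as reported by the live controller, equals its assigned UUID with the dashes stripped. Condensed from the traced helpers (uuid2nguid at common.sh@787, nvme_get_nguid at nsid.sh@40-43); the ascii_upcase step is folded into jq here, whereas the script uppercases via echo:

    uuid2nguid() {
        local uuid=${1//-/}   # b9c7a9f8-50e7-... -> b9c7a9f850e7...
        echo "${uuid^^}"      # uppercase, matching the echoed B9C7... values
    }
    nvme_get_nguid() {
        local ctrlr=$1 nsid=$2
        nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r '.nguid | ascii_upcase'
    }
    [[ $(uuid2nguid "$ns1uuid") == $(nvme_get_nguid nvme0 1) ]] && echo "nsid 1 NGUID matches"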
00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:39.136 rmmod nvme_tcp 00:17:39.136 rmmod nvme_fabrics 00:17:39.136 rmmod nvme_keyring 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 87706 ']' 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 87706 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 87706 ']' 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 87706 00:17:39.136 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:39.137 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.137 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87706 00:17:39.137 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.137 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.137 killing process with pid 87706 00:17:39.137 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87706' 00:17:39.137 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 87706 00:17:39.137 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 87706 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:39.395 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:17:39.653 00:17:39.653 real 0m5.454s 00:17:39.653 user 0m8.327s 00:17:39.653 sys 0m1.436s 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:39.653 ************************************ 00:17:39.653 END TEST nvmf_nsid 00:17:39.653 ************************************ 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:39.653 00:17:39.653 real 7m21.888s 00:17:39.653 user 17m43.575s 00:17:39.653 sys 1m28.093s 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.653 11:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.653 ************************************ 00:17:39.653 END TEST nvmf_target_extra 00:17:39.653 ************************************ 00:17:39.653 11:39:40 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:39.653 11:39:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:39.653 11:39:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.653 11:39:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:39.653 ************************************ 00:17:39.653 START TEST nvmf_host 00:17:39.653 ************************************ 00:17:39.653 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:39.912 * Looking for test storage... 
00:17:39.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.912 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.913 --rc genhtml_branch_coverage=1 00:17:39.913 --rc genhtml_function_coverage=1 00:17:39.913 --rc genhtml_legend=1 00:17:39.913 --rc geninfo_all_blocks=1 00:17:39.913 --rc geninfo_unexecuted_blocks=1 00:17:39.913 00:17:39.913 ' 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:39.913 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:39.913 --rc genhtml_branch_coverage=1 00:17:39.913 --rc genhtml_function_coverage=1 00:17:39.913 --rc genhtml_legend=1 00:17:39.913 --rc geninfo_all_blocks=1 00:17:39.913 --rc geninfo_unexecuted_blocks=1 00:17:39.913 00:17:39.913 ' 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.913 --rc genhtml_branch_coverage=1 00:17:39.913 --rc genhtml_function_coverage=1 00:17:39.913 --rc genhtml_legend=1 00:17:39.913 --rc geninfo_all_blocks=1 00:17:39.913 --rc geninfo_unexecuted_blocks=1 00:17:39.913 00:17:39.913 ' 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.913 --rc genhtml_branch_coverage=1 00:17:39.913 --rc genhtml_function_coverage=1 00:17:39.913 --rc genhtml_legend=1 00:17:39.913 --rc geninfo_all_blocks=1 00:17:39.913 --rc geninfo_unexecuted_blocks=1 00:17:39.913 00:17:39.913 ' 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.913 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
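[Note] In the common.sh prologue above, the host identity used for every later nvme connect is derived from nvme gen-hostnqn. A sketch of that derivation; extracting the host ID as the UUID suffix of the NQN is an assumption (the trace only shows both values sharing the same UUID):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}        # assumed: host ID is the UUID after the last colon
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 "${NVME_HOST[@]}"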
00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.913 ************************************ 00:17:39.913 START TEST nvmf_multicontroller 00:17:39.913 ************************************ 00:17:39.913 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:40.174 * Looking for test storage... 00:17:40.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:40.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.174 --rc genhtml_branch_coverage=1 00:17:40.174 --rc genhtml_function_coverage=1 00:17:40.174 --rc genhtml_legend=1 00:17:40.174 --rc geninfo_all_blocks=1 00:17:40.174 --rc geninfo_unexecuted_blocks=1 00:17:40.174 00:17:40.174 ' 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:40.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.174 --rc genhtml_branch_coverage=1 00:17:40.174 --rc genhtml_function_coverage=1 00:17:40.174 --rc genhtml_legend=1 00:17:40.174 --rc geninfo_all_blocks=1 00:17:40.174 --rc geninfo_unexecuted_blocks=1 00:17:40.174 00:17:40.174 ' 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:40.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.174 --rc genhtml_branch_coverage=1 00:17:40.174 --rc genhtml_function_coverage=1 00:17:40.174 --rc genhtml_legend=1 00:17:40.174 --rc geninfo_all_blocks=1 00:17:40.174 --rc geninfo_unexecuted_blocks=1 00:17:40.174 00:17:40.174 ' 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:40.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.174 --rc genhtml_branch_coverage=1 00:17:40.174 --rc genhtml_function_coverage=1 00:17:40.174 --rc genhtml_legend=1 00:17:40.174 --rc geninfo_all_blocks=1 00:17:40.174 --rc geninfo_unexecuted_blocks=1 00:17:40.174 00:17:40.174 ' 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:17:40.174 11:39:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:40.174 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:40.175 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.175 11:39:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:40.175 11:39:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:40.175 Cannot find device "nvmf_init_br" 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:40.175 Cannot find device "nvmf_init_br2" 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:40.175 Cannot find device "nvmf_tgt_br" 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:40.175 Cannot find device "nvmf_tgt_br2" 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:40.175 Cannot find device "nvmf_init_br" 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:40.175 Cannot find device "nvmf_init_br2" 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:40.175 Cannot find device "nvmf_tgt_br" 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:17:40.175 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:40.434 Cannot find device "nvmf_tgt_br2" 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:40.434 Cannot find device "nvmf_br" 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:40.434 Cannot find device "nvmf_init_if" 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:40.434 Cannot find device "nvmf_init_if2" 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:40.434 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:40.434 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:40.434 11:39:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:40.434 11:39:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:40.434 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:40.693 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:40.693 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:40.693 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:40.693 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:40.693 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:40.694 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:40.694 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:17:40.694 00:17:40.694 --- 10.0.0.3 ping statistics --- 00:17:40.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.694 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:40.694 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:40.694 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.106 ms 00:17:40.694 00:17:40.694 --- 10.0.0.4 ping statistics --- 00:17:40.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.694 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:40.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:40.694 00:17:40.694 --- 10.0.0.1 ping statistics --- 00:17:40.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.694 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:40.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:40.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:17:40.694 00:17:40.694 --- 10.0.0.2 ping statistics --- 00:17:40.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.694 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=88125 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 88125 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 88125 ']' 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.694 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.694 [2024-12-16 11:39:41.324355] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
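nvmfappstart (common.sh@507-512) launches nvmf_tgt inside the namespace with the requested core mask and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock; max_retries=100 in the trace is the polling budget. A reduced sketch of that start-and-wait logic, using scripts/rpc.py in place of the suite's rpc_cmd wrapper (the real helper also checks that the pid is still alive between polls):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket until the app is ready to serve.
    for ((i = 100; i != 0; i--)); do
        rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
    ((i != 0)) || { echo "nvmf_tgt failed to start" >&2; exit 1; }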
00:17:40.694 [2024-12-16 11:39:41.324468] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.952 [2024-12-16 11:39:41.472615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.952 [2024-12-16 11:39:41.523333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.952 [2024-12-16 11:39:41.523414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.952 [2024-12-16 11:39:41.523442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.952 [2024-12-16 11:39:41.523450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.952 [2024-12-16 11:39:41.523457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.952 [2024-12-16 11:39:41.524694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.952 [2024-12-16 11:39:41.524845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.952 [2024-12-16 11:39:41.524847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.953 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.953 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:17:40.953 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:40.953 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:40.953 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:40.953 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.953 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.953 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.953 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 [2024-12-16 11:39:41.709183] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 Malloc0 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 [2024-12-16 11:39:41.777956] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:41.211 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.212 [2024-12-16 11:39:41.785854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.212 Malloc1 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=88165 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 88165 /var/tmp/bdevperf.sock 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 88165 ']' 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
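multicontroller.sh@27-41 provisions the target over RPC: one TCP transport, two 64 MB malloc bdevs, and two subsystems (cnode1, cnode2) that each listen on both 10.0.0.3:4420 and :4421. Two listeners per subsystem is the precondition for the multipath checks that follow. The equivalent rpc.py sequence for cnode1 (cnode2 repeats it with Malloc1):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421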
00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.212 11:39:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.780 NVMe0n1 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.780 1 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.780 2024/12/16 11:39:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:41.780 request: 00:17:41.780 { 00:17:41.780 "method": "bdev_nvme_attach_controller", 00:17:41.780 "params": { 00:17:41.780 "name": "NVMe0", 00:17:41.780 "trtype": "tcp", 00:17:41.780 "traddr": "10.0.0.3", 00:17:41.780 "adrfam": "ipv4", 00:17:41.780 "trsvcid": "4420", 00:17:41.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.780 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:41.780 "hostaddr": "10.0.0.1", 00:17:41.780 "prchk_reftag": false, 00:17:41.780 "prchk_guard": false, 00:17:41.780 "hdgst": false, 00:17:41.780 "ddgst": false, 00:17:41.780 "allow_unrecognized_csi": false 00:17:41.780 } 00:17:41.780 } 00:17:41.780 Got JSON-RPC error response 00:17:41.780 GoRPCClient: error on JSON-RPC call 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.780 2024/12/16 11:39:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:41.780 request: 00:17:41.780 { 00:17:41.780 "method": "bdev_nvme_attach_controller", 00:17:41.780 "params": { 00:17:41.780 "name": "NVMe0", 00:17:41.780 "trtype": "tcp", 00:17:41.780 "traddr": "10.0.0.3", 00:17:41.780 "adrfam": "ipv4", 00:17:41.780 "trsvcid": "4420", 00:17:41.780 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:41.780 "hostaddr": "10.0.0.1", 00:17:41.780 "prchk_reftag": false, 00:17:41.780 "prchk_guard": false, 00:17:41.780 "hdgst": false, 00:17:41.780 "ddgst": false, 00:17:41.780 "allow_unrecognized_csi": false 00:17:41.780 } 00:17:41.780 } 00:17:41.780 Got JSON-RPC error response 00:17:41.780 GoRPCClient: error on JSON-RPC call 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.780 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.781 2024/12/16 11:39:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:17:41.781 request: 00:17:41.781 { 00:17:41.781 
"method": "bdev_nvme_attach_controller", 00:17:41.781 "params": { 00:17:41.781 "name": "NVMe0", 00:17:41.781 "trtype": "tcp", 00:17:41.781 "traddr": "10.0.0.3", 00:17:41.781 "adrfam": "ipv4", 00:17:41.781 "trsvcid": "4420", 00:17:41.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.781 "hostaddr": "10.0.0.1", 00:17:41.781 "prchk_reftag": false, 00:17:41.781 "prchk_guard": false, 00:17:41.781 "hdgst": false, 00:17:41.781 "ddgst": false, 00:17:41.781 "multipath": "disable", 00:17:41.781 "allow_unrecognized_csi": false 00:17:41.781 } 00:17:41.781 } 00:17:41.781 Got JSON-RPC error response 00:17:41.781 GoRPCClient: error on JSON-RPC call 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.781 2024/12/16 11:39:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:41.781 request: 00:17:41.781 { 00:17:41.781 "method": "bdev_nvme_attach_controller", 00:17:41.781 "params": { 00:17:41.781 "name": "NVMe0", 00:17:41.781 "trtype": "tcp", 00:17:41.781 "traddr": 
"10.0.0.3", 00:17:41.781 "adrfam": "ipv4", 00:17:41.781 "trsvcid": "4420", 00:17:41.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.781 "hostaddr": "10.0.0.1", 00:17:41.781 "prchk_reftag": false, 00:17:41.781 "prchk_guard": false, 00:17:41.781 "hdgst": false, 00:17:41.781 "ddgst": false, 00:17:41.781 "multipath": "failover", 00:17:41.781 "allow_unrecognized_csi": false 00:17:41.781 } 00:17:41.781 } 00:17:41.781 Got JSON-RPC error response 00:17:41.781 GoRPCClient: error on JSON-RPC call 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.781 NVMe0n1 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.781 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:42.040 00:17:42.040 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.040 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:42.040 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:42.040 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.040 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:42.040 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.040 11:39:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:42.040 11:39:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:42.975 { 00:17:42.975 "results": [ 00:17:42.975 { 00:17:42.975 "job": "NVMe0n1", 00:17:42.975 "core_mask": "0x1", 00:17:42.975 "workload": "write", 00:17:42.975 "status": "finished", 00:17:42.975 "queue_depth": 128, 00:17:42.975 "io_size": 4096, 00:17:42.975 "runtime": 1.006584, 00:17:42.975 "iops": 19644.65956144743, 00:17:42.975 "mibps": 76.73695141190403, 00:17:42.975 "io_failed": 0, 00:17:42.975 "io_timeout": 0, 00:17:42.975 "avg_latency_us": 6496.851360004414, 00:17:42.975 "min_latency_us": 2293.76, 00:17:42.975 "max_latency_us": 13285.934545454546 00:17:42.975 } 00:17:42.975 ], 00:17:42.975 "core_count": 1 00:17:42.975 } 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:43.233 nvme1n1 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 00:17:43.233 
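bdevperf was started with -z (wait for an RPC-driven test) and the workload from @43: 128-deep 4 KiB writes for one second. bdevperf.py perform_tests triggers the run, and the JSON object above is its result: roughly 19.6k IOPS / 76.7 MiB/s on NVMe0n1 over the freshly attached path. The driving side, with paths and parameters copied from the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # ...attach controllers over the socket as above, then fire the workload:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests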
11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:43.233 nvme1n1 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:17:43.233 11:39:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.492 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:17:43.492 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 88165 00:17:43.492 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 88165 ']' 00:17:43.492 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 88165 00:17:43.492 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:17:43.492 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.492 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88165 00:17:43.492 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.492 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.492 killing process with pid 88165 00:17:43.492 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88165' 00:17:43.492 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 88165 00:17:43.493 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 88165 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM 
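Steps @102-@113 verify that the hostaddr passed with -i is the source address the target actually sees: attach nvme1 to cnode2 with -i 10.0.0.1, read the qpair's peer address back from the target over its default /var/tmp/spdk.sock socket, detach, then repeat with -i 10.0.0.2 and expect 10.0.0.2. One round of that check as a sketch:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1

    peer=$(rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 \
               | jq -r '.[].peer_address.traddr')
    [[ $peer == 10.0.0.1 ]]   # target-side view must match the requested hostaddr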
EXIT 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:17:43.752 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:17:43.752 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:43.752 [2024-12-16 11:39:41.916350] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:17:43.752 [2024-12-16 11:39:41.916460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88165 ] 00:17:43.752 [2024-12-16 11:39:42.066046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.752 [2024-12-16 11:39:42.130423] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.752 [2024-12-16 11:39:42.565978] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 5e99efd9-7aba-40e6-b3f1-19fc34ea9701 already exists 00:17:43.752 [2024-12-16 11:39:42.566049] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:5e99efd9-7aba-40e6-b3f1-19fc34ea9701 alias for bdev NVMe1n1 00:17:43.752 [2024-12-16 11:39:42.566073] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:43.752 Running I/O for 1 seconds... 
00:17:43.753 19646.00 IOPS, 76.74 MiB/s 00:17:43.753 Latency(us) 00:17:43.753 [2024-12-16T11:39:44.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.753 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:43.753 NVMe0n1 : 1.01 19644.66 76.74 0.00 0.00 6496.85 2293.76 13285.93 00:17:43.753 [2024-12-16T11:39:44.506Z] =================================================================================================================== 00:17:43.753 [2024-12-16T11:39:44.506Z] Total : 19644.66 76.74 0.00 0.00 6496.85 2293.76 13285.93 00:17:43.753 Received shutdown signal, test time was about 1.000000 seconds 00:17:43.753 00:17:43.753 Latency(us) 00:17:43.753 [2024-12-16T11:39:44.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.753 [2024-12-16T11:39:44.506Z] =================================================================================================================== 00:17:43.753 [2024-12-16T11:39:44.506Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:43.753 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:43.753 rmmod nvme_tcp 00:17:43.753 rmmod nvme_fabrics 00:17:43.753 rmmod nvme_keyring 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 88125 ']' 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 88125 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 88125 ']' 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 88125 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.753 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88125 00:17:44.011 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:44.012 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:44.012 killing process with pid 88125 00:17:44.012 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88125' 00:17:44.012 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 88125 00:17:44.012 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 88125 00:17:44.270 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:44.270 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:44.270 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.271 11:39:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
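nvmf_tcp_fini unwinds the setup in reverse: iptr (common.sh@791) reloads the iptables ruleset minus every rule carrying the SPDK_NVMF tag that ipts added, then nvmf_veth_fini unplugs the bridge ports, deletes the links and bridge, and removes the namespace. The shape of that cleanup, names from the trace:

    iptr() {
        # Drop exactly the rules ipts installed, by filtering on the comment tag.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    iptr
    ip link set nvmf_init_br nomaster
    ip link set nvmf_init_br down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk   # what _remove_spdk_ns boils down to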
00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:17:44.530 00:17:44.530 real 0m4.423s 00:17:44.530 user 0m12.587s 00:17:44.530 sys 0m1.177s 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:44.530 ************************************ 00:17:44.530 END TEST nvmf_multicontroller 00:17:44.530 ************************************ 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.530 ************************************ 00:17:44.530 START TEST nvmf_aer 00:17:44.530 ************************************ 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:44.530 * Looking for test storage... 00:17:44.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:44.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.530 --rc genhtml_branch_coverage=1 00:17:44.530 --rc genhtml_function_coverage=1 00:17:44.530 --rc genhtml_legend=1 00:17:44.530 --rc geninfo_all_blocks=1 00:17:44.530 --rc geninfo_unexecuted_blocks=1 00:17:44.530 00:17:44.530 ' 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:44.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.530 --rc genhtml_branch_coverage=1 00:17:44.530 --rc genhtml_function_coverage=1 00:17:44.530 --rc genhtml_legend=1 00:17:44.530 --rc geninfo_all_blocks=1 00:17:44.530 --rc geninfo_unexecuted_blocks=1 00:17:44.530 00:17:44.530 ' 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:44.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.530 --rc genhtml_branch_coverage=1 00:17:44.530 --rc genhtml_function_coverage=1 00:17:44.530 --rc genhtml_legend=1 00:17:44.530 --rc geninfo_all_blocks=1 00:17:44.530 --rc geninfo_unexecuted_blocks=1 00:17:44.530 00:17:44.530 ' 00:17:44.530 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:44.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.530 --rc genhtml_branch_coverage=1 00:17:44.531 --rc genhtml_function_coverage=1 00:17:44.531 --rc genhtml_legend=1 00:17:44.531 --rc geninfo_all_blocks=1 00:17:44.531 --rc geninfo_unexecuted_blocks=1 00:17:44.531 00:17:44.531 ' 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.531 
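The scripts/common.sh trace above is the coverage check at the top of aer.sh: lt compares the installed lcov version against 1.15 by splitting both version strings on the IFS set '.-:' and walking the numeric components left to right (missing components count as 0). A trimmed sketch of that comparison logic:

    cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == *'='* ]]   # every component equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo 'lcov predates 2.x; keep the legacy --rc lcov_* options'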
11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:44.531 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:44.531 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:44.789 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:44.790 Cannot find device "nvmf_init_br" 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:44.790 Cannot find device "nvmf_init_br2" 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:44.790 Cannot find device "nvmf_tgt_br" 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.790 Cannot find device "nvmf_tgt_br2" 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:44.790 Cannot find device "nvmf_init_br" 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:44.790 Cannot find device "nvmf_init_br2" 00:17:44.790 11:39:45 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:44.790 Cannot find device "nvmf_tgt_br" 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:44.790 Cannot find device "nvmf_tgt_br2" 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:44.790 Cannot find device "nvmf_br" 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:44.790 Cannot find device "nvmf_init_if" 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:44.790 Cannot find device "nvmf_init_if2" 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.790 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.790 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:44.790 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:45.048 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:45.048 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:17:45.048 00:17:45.048 --- 10.0.0.3 ping statistics --- 00:17:45.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.048 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:45.048 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:45.048 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:17:45.048 00:17:45.048 --- 10.0.0.4 ping statistics --- 00:17:45.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.048 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:45.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:45.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:45.048 00:17:45.048 --- 10.0.0.1 ping statistics --- 00:17:45.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.048 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:45.048 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:45.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:17:45.049 00:17:45.049 --- 10.0.0.2 ping statistics --- 00:17:45.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.049 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=88469 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 88469 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 88469 ']' 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.049 11:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.049 [2024-12-16 11:39:45.735042] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
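The four pings that just passed validate the fixture nvmf_veth_init assembled above: per side, a veth pair whose bridge-facing peer is enslaved to nvmf_br, with the target ends (10.0.0.3/10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace, the initiator ends (10.0.0.1/10.0.0.2) left in the root namespace, and iptables ACCEPT rules punched for TCP port 4420. Collapsed to a single initiator/target pair, the wiring is roughly this (device names and addresses taken from the trace; the second if2/br2 pair is built the same way):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br                          # splice both sides onto the bridge
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                               # root namespace -> target namespace
The DPDK/EAL banner continuing below is nvmf_tgt (pid 88469) coming up inside that namespace; waitforlisten blocks until the /var/tmp/spdk.sock RPC socket accepts connections.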
00:17:45.049 [2024-12-16 11:39:45.735119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.307 [2024-12-16 11:39:45.885107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:45.307 [2024-12-16 11:39:45.958339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.307 [2024-12-16 11:39:45.958395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.307 [2024-12-16 11:39:45.958407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.307 [2024-12-16 11:39:45.958416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.307 [2024-12-16 11:39:45.958424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:45.307 [2024-12-16 11:39:45.959615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.307 [2024-12-16 11:39:45.959748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.307 [2024-12-16 11:39:45.959810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:45.307 [2024-12-16 11:39:45.959812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.566 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.566 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:17:45.566 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:45.566 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.566 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.566 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.566 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:45.566 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.566 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.567 [2024-12-16 11:39:46.137170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.567 Malloc0 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.567 [2024-12-16 11:39:46.199896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.567 [ 00:17:45.567 { 00:17:45.567 "allow_any_host": true, 00:17:45.567 "hosts": [], 00:17:45.567 "listen_addresses": [], 00:17:45.567 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:45.567 "subtype": "Discovery" 00:17:45.567 }, 00:17:45.567 { 00:17:45.567 "allow_any_host": true, 00:17:45.567 "hosts": [], 00:17:45.567 "listen_addresses": [ 00:17:45.567 { 00:17:45.567 "adrfam": "IPv4", 00:17:45.567 "traddr": "10.0.0.3", 00:17:45.567 "trsvcid": "4420", 00:17:45.567 "trtype": "TCP" 00:17:45.567 } 00:17:45.567 ], 00:17:45.567 "max_cntlid": 65519, 00:17:45.567 "max_namespaces": 2, 00:17:45.567 "min_cntlid": 1, 00:17:45.567 "model_number": "SPDK bdev Controller", 00:17:45.567 "namespaces": [ 00:17:45.567 { 00:17:45.567 "bdev_name": "Malloc0", 00:17:45.567 "name": "Malloc0", 00:17:45.567 "nguid": "9A51CFBADDB14A4DA24F8BCB071F5B30", 00:17:45.567 "nsid": 1, 00:17:45.567 "uuid": "9a51cfba-ddb1-4a4d-a24f-8bcb071f5b30" 00:17:45.567 } 00:17:45.567 ], 00:17:45.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.567 "serial_number": "SPDK00000000000001", 00:17:45.567 "subtype": "NVMe" 00:17:45.567 } 00:17:45.567 ] 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=88515 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:45.567 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 Malloc1 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 [ 00:17:45.826 { 00:17:45.826 "allow_any_host": true, 00:17:45.826 "hosts": [], 00:17:45.826 "listen_addresses": [], 00:17:45.826 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:45.826 "subtype": "Discovery" 00:17:45.826 }, 00:17:45.826 { 00:17:45.826 "allow_any_host": true, 00:17:45.826 "hosts": [], 00:17:45.826 "listen_addresses": [ 00:17:45.826 { 00:17:45.826 "adrfam": "IPv4", 00:17:45.826 "traddr": "10.0.0.3", 00:17:45.826 "trsvcid": "4420", 00:17:45.826 "trtype": "TCP" 00:17:45.826 } 00:17:45.826 ], 00:17:45.826 "max_cntlid": 65519, 00:17:45.826 "max_namespaces": 2, 00:17:45.826 "min_cntlid": 1, 00:17:45.826 "model_number": "SPDK bdev Controller", 00:17:45.826 "namespaces": [ 00:17:45.826 { 00:17:45.826 "bdev_name": "Malloc0", 00:17:45.826 "name": "Malloc0", 00:17:45.826 "nguid": "9A51CFBADDB14A4DA24F8BCB071F5B30", 00:17:45.826 "nsid": 1, 00:17:45.826 "uuid": "9a51cfba-ddb1-4a4d-a24f-8bcb071f5b30" 00:17:45.826 }, 00:17:45.826 { 00:17:45.826 "bdev_name": "Malloc1", 00:17:45.826 "name": "Malloc1", 
00:17:45.826 "nguid": "2C2B8EE23DEA475CB2ED7DF0C953E382", 00:17:45.826 "nsid": 2, 00:17:45.826 "uuid": "2c2b8ee2-3dea-475c-b2ed-7df0c953e382" 00:17:45.826 } 00:17:45.826 ], 00:17:45.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.826 "serial_number": "SPDK00000000000001", 00:17:45.826 "subtype": "NVMe" 00:17:45.826 } 00:17:45.826 ] 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 88515 00:17:45.826 Asynchronous Event Request test 00:17:45.826 Attaching to 10.0.0.3 00:17:45.826 Attached to 10.0.0.3 00:17:45.826 Registering asynchronous event callbacks... 00:17:45.826 Starting namespace attribute notice tests for all controllers... 00:17:45.826 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:45.826 aer_cb - Changed Namespace 00:17:45.826 Cleaning up... 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.826 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:46.085 rmmod nvme_tcp 00:17:46.085 rmmod nvme_fabrics 00:17:46.085 rmmod nvme_keyring 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 88469 ']' 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 88469 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 
88469 ']' 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 88469 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88469 00:17:46.085 killing process with pid 88469 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88469' 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 88469 00:17:46.085 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 88469 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:46.344 11:39:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.344 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:46.345 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:46.345 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:46.345 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:46.345 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:46.345 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:46.345 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:46.345 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:17:46.604 00:17:46.604 real 0m2.086s 00:17:46.604 user 0m4.147s 00:17:46.604 sys 0m0.733s 00:17:46.604 ************************************ 00:17:46.604 END TEST nvmf_aer 00:17:46.604 ************************************ 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.604 ************************************ 00:17:46.604 START TEST nvmf_async_init 00:17:46.604 ************************************ 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:46.604 * Looking for test storage... 00:17:46.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:17:46.604 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.863 11:39:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:46.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.863 --rc genhtml_branch_coverage=1 00:17:46.863 --rc genhtml_function_coverage=1 00:17:46.863 --rc genhtml_legend=1 00:17:46.863 --rc geninfo_all_blocks=1 00:17:46.863 --rc geninfo_unexecuted_blocks=1 00:17:46.863 00:17:46.863 ' 00:17:46.863 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:46.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.864 --rc genhtml_branch_coverage=1 00:17:46.864 --rc genhtml_function_coverage=1 00:17:46.864 --rc genhtml_legend=1 00:17:46.864 --rc geninfo_all_blocks=1 00:17:46.864 --rc geninfo_unexecuted_blocks=1 00:17:46.864 00:17:46.864 ' 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:46.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.864 --rc genhtml_branch_coverage=1 00:17:46.864 --rc genhtml_function_coverage=1 00:17:46.864 --rc genhtml_legend=1 00:17:46.864 --rc geninfo_all_blocks=1 00:17:46.864 --rc geninfo_unexecuted_blocks=1 00:17:46.864 00:17:46.864 ' 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:46.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.864 --rc genhtml_branch_coverage=1 00:17:46.864 --rc genhtml_function_coverage=1 00:17:46.864 --rc genhtml_legend=1 00:17:46.864 --rc geninfo_all_blocks=1 00:17:46.864 --rc geninfo_unexecuted_blocks=1 00:17:46.864 00:17:46.864 ' 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:46.864 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:46.864 11:39:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d -
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=76574f5b0a584238994e8be2c3dd45c0
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
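The uuidgen | tr -d - pipeline above is where this test's namespace GUID comes from: stripping the dashes from a random UUID leaves the 32 hex digits that nvmf_subsystem_add_ns accepts through -g, as the add_ns call further below shows. A sketch (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py):
  nguid=$(uuidgen | tr -d -)   # 36-char UUID -> 32 hex digits, e.g. 76574f5b0a584238994e8be2c3dd45c0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"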
00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:46.864 Cannot find device "nvmf_init_br" 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:46.864 Cannot find device "nvmf_init_br2" 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:46.864 Cannot find device "nvmf_tgt_br" 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:17:46.864 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.865 Cannot find device "nvmf_tgt_br2" 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:46.865 Cannot find device "nvmf_init_br" 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:46.865 Cannot find device "nvmf_init_br2" 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:46.865 Cannot find device "nvmf_tgt_br" 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:46.865 Cannot find device "nvmf_tgt_br2" 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:46.865 Cannot find device "nvmf_br" 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:46.865 Cannot find device "nvmf_init_if" 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:46.865 Cannot find device "nvmf_init_if2" 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:17:46.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.865 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:47.124 11:39:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:17:47.124 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:17:47.124 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms
00:17:47.124
00:17:47.124 --- 10.0.0.3 ping statistics ---
00:17:47.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:47.124 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:17:47.124 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:17:47.124 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms
00:17:47.124
00:17:47.124 --- 10.0.0.4 ping statistics ---
00:17:47.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:47.124 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:17:47.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:47.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms
00:17:47.124
00:17:47.124 --- 10.0.0.1 ping statistics ---
00:17:47.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:47.124 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms
00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:17:47.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:47.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:17:47.124 00:17:47.124 --- 10.0.0.2 ping statistics --- 00:17:47.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.124 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:47.124 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=88740 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 88740 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 88740 ']' 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.384 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:47.384 [2024-12-16 11:39:47.958671] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:17:47.384 [2024-12-16 11:39:47.959530] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.384 [2024-12-16 11:39:48.104511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.643 [2024-12-16 11:39:48.162378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:47.643 [2024-12-16 11:39:48.162442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.643 [2024-12-16 11:39:48.162453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.643 [2024-12-16 11:39:48.162462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.643 [2024-12-16 11:39:48.162469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.643 [2024-12-16 11:39:48.162889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.578 [2024-12-16 11:39:49.063203] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.578 null0 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 76574f5b0a584238994e8be2c3dd45c0 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.578 [2024-12-16 11:39:49.103320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.578 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.837 nvme0n1 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.837 [ 00:17:48.837 { 00:17:48.837 "aliases": [ 00:17:48.837 "76574f5b-0a58-4238-994e-8be2c3dd45c0" 00:17:48.837 ], 00:17:48.837 "assigned_rate_limits": { 00:17:48.837 "r_mbytes_per_sec": 0, 00:17:48.837 "rw_ios_per_sec": 0, 00:17:48.837 "rw_mbytes_per_sec": 0, 00:17:48.837 "w_mbytes_per_sec": 0 00:17:48.837 }, 00:17:48.837 "block_size": 512, 00:17:48.837 "claimed": false, 00:17:48.837 "driver_specific": { 00:17:48.837 "mp_policy": "active_passive", 00:17:48.837 "nvme": [ 00:17:48.837 { 00:17:48.837 "ctrlr_data": { 00:17:48.837 "ana_reporting": false, 00:17:48.837 "cntlid": 1, 00:17:48.837 "firmware_revision": "25.01", 00:17:48.837 "model_number": "SPDK bdev Controller", 00:17:48.837 "multi_ctrlr": true, 00:17:48.837 "oacs": { 00:17:48.837 "firmware": 0, 00:17:48.837 "format": 0, 00:17:48.837 "ns_manage": 0, 00:17:48.837 "security": 0 00:17:48.837 }, 00:17:48.837 "serial_number": "00000000000000000000", 00:17:48.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:48.837 "vendor_id": "0x8086" 00:17:48.837 }, 00:17:48.837 "ns_data": { 00:17:48.837 "can_share": true, 00:17:48.837 "id": 1 00:17:48.837 }, 00:17:48.837 "trid": { 00:17:48.837 "adrfam": "IPv4", 00:17:48.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:48.837 "traddr": "10.0.0.3", 00:17:48.837 "trsvcid": "4420", 00:17:48.837 "trtype": "TCP" 00:17:48.837 }, 00:17:48.837 "vs": { 00:17:48.837 "nvme_version": "1.3" 00:17:48.837 } 00:17:48.837 } 00:17:48.837 ] 00:17:48.837 }, 00:17:48.837 "memory_domains": [ 00:17:48.837 { 00:17:48.837 "dma_device_id": "system", 00:17:48.837 "dma_device_type": 1 00:17:48.837 } 00:17:48.837 ], 00:17:48.837 "name": "nvme0n1", 00:17:48.837 "num_blocks": 2097152, 00:17:48.837 "numa_id": -1, 00:17:48.837 "product_name": "NVMe disk", 00:17:48.837 "supported_io_types": { 00:17:48.837 "abort": true, 
00:17:48.837 "compare": true, 00:17:48.837 "compare_and_write": true, 00:17:48.837 "copy": true, 00:17:48.837 "flush": true, 00:17:48.837 "get_zone_info": false, 00:17:48.837 "nvme_admin": true, 00:17:48.837 "nvme_io": true, 00:17:48.837 "nvme_io_md": false, 00:17:48.837 "nvme_iov_md": false, 00:17:48.837 "read": true, 00:17:48.837 "reset": true, 00:17:48.837 "seek_data": false, 00:17:48.837 "seek_hole": false, 00:17:48.837 "unmap": false, 00:17:48.837 "write": true, 00:17:48.837 "write_zeroes": true, 00:17:48.837 "zcopy": false, 00:17:48.837 "zone_append": false, 00:17:48.837 "zone_management": false 00:17:48.837 }, 00:17:48.837 "uuid": "76574f5b-0a58-4238-994e-8be2c3dd45c0", 00:17:48.837 "zoned": false 00:17:48.837 } 00:17:48.837 ] 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.837 [2024-12-16 11:39:49.372614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:48.837 [2024-12-16 11:39:49.372741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491130 (9): Bad file descriptor 00:17:48.837 [2024-12-16 11:39:49.504350] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.837 [ 00:17:48.837 { 00:17:48.837 "aliases": [ 00:17:48.837 "76574f5b-0a58-4238-994e-8be2c3dd45c0" 00:17:48.837 ], 00:17:48.837 "assigned_rate_limits": { 00:17:48.837 "r_mbytes_per_sec": 0, 00:17:48.837 "rw_ios_per_sec": 0, 00:17:48.837 "rw_mbytes_per_sec": 0, 00:17:48.837 "w_mbytes_per_sec": 0 00:17:48.837 }, 00:17:48.837 "block_size": 512, 00:17:48.837 "claimed": false, 00:17:48.837 "driver_specific": { 00:17:48.837 "mp_policy": "active_passive", 00:17:48.837 "nvme": [ 00:17:48.837 { 00:17:48.837 "ctrlr_data": { 00:17:48.837 "ana_reporting": false, 00:17:48.837 "cntlid": 2, 00:17:48.837 "firmware_revision": "25.01", 00:17:48.837 "model_number": "SPDK bdev Controller", 00:17:48.837 "multi_ctrlr": true, 00:17:48.837 "oacs": { 00:17:48.837 "firmware": 0, 00:17:48.837 "format": 0, 00:17:48.837 "ns_manage": 0, 00:17:48.837 "security": 0 00:17:48.837 }, 00:17:48.837 "serial_number": "00000000000000000000", 00:17:48.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:48.837 "vendor_id": "0x8086" 00:17:48.837 }, 00:17:48.837 "ns_data": { 00:17:48.837 "can_share": true, 00:17:48.837 "id": 1 00:17:48.837 }, 00:17:48.837 "trid": { 00:17:48.837 "adrfam": "IPv4", 00:17:48.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:48.837 "traddr": "10.0.0.3", 00:17:48.837 "trsvcid": "4420", 00:17:48.837 "trtype": "TCP" 00:17:48.837 }, 00:17:48.837 "vs": { 00:17:48.837 "nvme_version": "1.3" 00:17:48.837 } 00:17:48.837 } 00:17:48.837 ] 
00:17:48.837 }, 00:17:48.837 "memory_domains": [ 00:17:48.837 { 00:17:48.837 "dma_device_id": "system", 00:17:48.837 "dma_device_type": 1 00:17:48.837 } 00:17:48.837 ], 00:17:48.837 "name": "nvme0n1", 00:17:48.837 "num_blocks": 2097152, 00:17:48.837 "numa_id": -1, 00:17:48.837 "product_name": "NVMe disk", 00:17:48.837 "supported_io_types": { 00:17:48.837 "abort": true, 00:17:48.837 "compare": true, 00:17:48.837 "compare_and_write": true, 00:17:48.837 "copy": true, 00:17:48.837 "flush": true, 00:17:48.837 "get_zone_info": false, 00:17:48.837 "nvme_admin": true, 00:17:48.837 "nvme_io": true, 00:17:48.837 "nvme_io_md": false, 00:17:48.837 "nvme_iov_md": false, 00:17:48.837 "read": true, 00:17:48.837 "reset": true, 00:17:48.837 "seek_data": false, 00:17:48.837 "seek_hole": false, 00:17:48.837 "unmap": false, 00:17:48.837 "write": true, 00:17:48.837 "write_zeroes": true, 00:17:48.837 "zcopy": false, 00:17:48.837 "zone_append": false, 00:17:48.837 "zone_management": false 00:17:48.837 }, 00:17:48.837 "uuid": "76574f5b-0a58-4238-994e-8be2c3dd45c0", 00:17:48.837 "zoned": false 00:17:48.837 } 00:17:48.837 ] 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0QNJhEb31E 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0QNJhEb31E 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.0QNJhEb31E 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:48.837 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.838 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.838 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.838 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:17:48.838 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.838 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:48.838 [2024-12-16 11:39:49.584804] 
tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:48.838 [2024-12-16 11:39:49.585049] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:49.096 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.096 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:17:49.096 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.096 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:49.097 [2024-12-16 11:39:49.600807] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.097 nvme0n1 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:49.097 [ 00:17:49.097 { 00:17:49.097 "aliases": [ 00:17:49.097 "76574f5b-0a58-4238-994e-8be2c3dd45c0" 00:17:49.097 ], 00:17:49.097 "assigned_rate_limits": { 00:17:49.097 "r_mbytes_per_sec": 0, 00:17:49.097 "rw_ios_per_sec": 0, 00:17:49.097 "rw_mbytes_per_sec": 0, 00:17:49.097 "w_mbytes_per_sec": 0 00:17:49.097 }, 00:17:49.097 "block_size": 512, 00:17:49.097 "claimed": false, 00:17:49.097 "driver_specific": { 00:17:49.097 "mp_policy": "active_passive", 00:17:49.097 "nvme": [ 00:17:49.097 { 00:17:49.097 "ctrlr_data": { 00:17:49.097 "ana_reporting": false, 00:17:49.097 "cntlid": 3, 00:17:49.097 "firmware_revision": "25.01", 00:17:49.097 "model_number": "SPDK bdev Controller", 00:17:49.097 "multi_ctrlr": true, 00:17:49.097 "oacs": { 00:17:49.097 "firmware": 0, 00:17:49.097 "format": 0, 00:17:49.097 "ns_manage": 0, 00:17:49.097 "security": 0 00:17:49.097 }, 00:17:49.097 "serial_number": "00000000000000000000", 00:17:49.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:49.097 "vendor_id": "0x8086" 00:17:49.097 }, 00:17:49.097 "ns_data": { 00:17:49.097 "can_share": true, 00:17:49.097 "id": 1 00:17:49.097 }, 00:17:49.097 "trid": { 00:17:49.097 "adrfam": "IPv4", 00:17:49.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:49.097 "traddr": "10.0.0.3", 00:17:49.097 "trsvcid": "4421", 00:17:49.097 "trtype": "TCP" 00:17:49.097 }, 00:17:49.097 "vs": { 00:17:49.097 "nvme_version": "1.3" 00:17:49.097 } 00:17:49.097 } 00:17:49.097 ] 00:17:49.097 }, 00:17:49.097 "memory_domains": [ 00:17:49.097 { 00:17:49.097 "dma_device_id": "system", 00:17:49.097 "dma_device_type": 1 00:17:49.097 } 00:17:49.097 ], 00:17:49.097 "name": "nvme0n1", 00:17:49.097 "num_blocks": 
2097152, 00:17:49.097 "numa_id": -1, 00:17:49.097 "product_name": "NVMe disk", 00:17:49.097 "supported_io_types": { 00:17:49.097 "abort": true, 00:17:49.097 "compare": true, 00:17:49.097 "compare_and_write": true, 00:17:49.097 "copy": true, 00:17:49.097 "flush": true, 00:17:49.097 "get_zone_info": false, 00:17:49.097 "nvme_admin": true, 00:17:49.097 "nvme_io": true, 00:17:49.097 "nvme_io_md": false, 00:17:49.097 "nvme_iov_md": false, 00:17:49.097 "read": true, 00:17:49.097 "reset": true, 00:17:49.097 "seek_data": false, 00:17:49.097 "seek_hole": false, 00:17:49.097 "unmap": false, 00:17:49.097 "write": true, 00:17:49.097 "write_zeroes": true, 00:17:49.097 "zcopy": false, 00:17:49.097 "zone_append": false, 00:17:49.097 "zone_management": false 00:17:49.097 }, 00:17:49.097 "uuid": "76574f5b-0a58-4238-994e-8be2c3dd45c0", 00:17:49.097 "zoned": false 00:17:49.097 } 00:17:49.097 ] 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.0QNJhEb31E 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:49.097 rmmod nvme_tcp 00:17:49.097 rmmod nvme_fabrics 00:17:49.097 rmmod nvme_keyring 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 88740 ']' 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 88740 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 88740 ']' 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 88740 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.097 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88740 00:17:49.356 killing process with pid 
88740
00:17:49.356 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:49.356 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:49.356 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88740'
00:17:49.356 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 88740
00:17:49.356 11:39:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 88740
00:17:49.356 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:49.356 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:49.356 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:49.356 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr
00:17:49.356 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save
00:17:49.356 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore
00:17:49.356 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:49.356 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:49.356 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:17:49.356 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:17:49.356 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:17:49.356 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns
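The teardown traced above is driven by iptr and nvmf_veth_fini in nvmf/common.sh: the iptables rules can be scrubbed wholesale because every rule the test added carried an '-m comment --comment SPDK_NVMF:...' tag (visible in the setup earlier in this section), and the veth/bridge topology is dismantled roughly in reverse order of its creation. A minimal sketch of that mechanism, assuming bash with iproute2/iptables and using only names visible in the trace; the internals of _remove_spdk_ns are not shown in this log, so the final netns delete is an assumption:

    iptr() {
        # Reload the ruleset minus the SPDK-tagged rules; this removes exactly
        # the ACCEPT rules inserted during setup and nothing else.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    nvmf_veth_fini() {
        # Detach the bridge ports, take them down, then delete the bridge.
        for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
            ip link set "$port" nomaster
            ip link set "$port" down
        done
        ip link delete nvmf_br type bridge
        # Deleting one end of a veth pair also removes its peer, so only the
        # *_if ends need explicit deletes once the bridge is gone.
        ip link delete nvmf_init_if
        ip link delete nvmf_init_if2
        ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
        ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
        ip netns delete nvmf_tgt_ns_spdk   # assumed: performed by _remove_spdk_ns
    }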
00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:17:49.614 00:17:49.614 real 0m3.105s 00:17:49.614 user 0m2.777s 00:17:49.614 sys 0m0.740s 00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.614 ************************************ 00:17:49.614 END TEST nvmf_async_init 00:17:49.614 ************************************ 00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:49.614 11:39:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:49.614 11:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:49.614 11:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.614 11:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.614 ************************************ 00:17:49.614 START TEST dma 00:17:49.614 ************************************ 00:17:49.614 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:49.873 * Looking for test storage... 00:17:49.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:49.873 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:49.873 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:17:49.873 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:49.873 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:49.873 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.873 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.873 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.873 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.873 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.873 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:49.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.874 --rc genhtml_branch_coverage=1 00:17:49.874 --rc genhtml_function_coverage=1 00:17:49.874 --rc genhtml_legend=1 00:17:49.874 --rc geninfo_all_blocks=1 00:17:49.874 --rc geninfo_unexecuted_blocks=1 00:17:49.874 00:17:49.874 ' 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:49.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.874 --rc genhtml_branch_coverage=1 00:17:49.874 --rc genhtml_function_coverage=1 00:17:49.874 --rc genhtml_legend=1 00:17:49.874 --rc geninfo_all_blocks=1 00:17:49.874 --rc geninfo_unexecuted_blocks=1 00:17:49.874 00:17:49.874 ' 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:49.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.874 --rc genhtml_branch_coverage=1 00:17:49.874 --rc genhtml_function_coverage=1 00:17:49.874 --rc genhtml_legend=1 00:17:49.874 --rc geninfo_all_blocks=1 00:17:49.874 --rc geninfo_unexecuted_blocks=1 00:17:49.874 00:17:49.874 ' 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:49.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.874 --rc genhtml_branch_coverage=1 00:17:49.874 --rc genhtml_function_coverage=1 00:17:49.874 --rc genhtml_legend=1 00:17:49.874 --rc geninfo_all_blocks=1 00:17:49.874 --rc geninfo_unexecuted_blocks=1 00:17:49.874 00:17:49.874 ' 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.874 11:39:50 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.874 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:17:49.874 ************************************ 00:17:49.874 END TEST dma 00:17:49.874 ************************************ 00:17:49.874 00:17:49.874 real 0m0.222s 00:17:49.874 user 0m0.144s 00:17:49.874 sys 0m0.083s 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.874 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.134 ************************************ 00:17:50.134 START TEST nvmf_identify 00:17:50.134 ************************************ 00:17:50.134 11:39:50 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:50.134 * Looking for test storage... 00:17:50.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:50.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.134 --rc genhtml_branch_coverage=1 00:17:50.134 --rc genhtml_function_coverage=1 00:17:50.134 --rc genhtml_legend=1 00:17:50.134 --rc geninfo_all_blocks=1 00:17:50.134 --rc geninfo_unexecuted_blocks=1 00:17:50.134 00:17:50.134 ' 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:50.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.134 --rc genhtml_branch_coverage=1 00:17:50.134 --rc genhtml_function_coverage=1 00:17:50.134 --rc genhtml_legend=1 00:17:50.134 --rc geninfo_all_blocks=1 00:17:50.134 --rc geninfo_unexecuted_blocks=1 00:17:50.134 00:17:50.134 ' 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:50.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.134 --rc genhtml_branch_coverage=1 00:17:50.134 --rc genhtml_function_coverage=1 00:17:50.134 --rc genhtml_legend=1 00:17:50.134 --rc geninfo_all_blocks=1 00:17:50.134 --rc geninfo_unexecuted_blocks=1 00:17:50.134 00:17:50.134 ' 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:50.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.134 --rc genhtml_branch_coverage=1 00:17:50.134 --rc genhtml_function_coverage=1 00:17:50.134 --rc genhtml_legend=1 00:17:50.134 --rc geninfo_all_blocks=1 00:17:50.134 --rc geninfo_unexecuted_blocks=1 00:17:50.134 00:17:50.134 ' 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.134 
11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.134 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:50.135 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.135 11:39:50 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:50.135 Cannot find device "nvmf_init_br" 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:50.135 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:50.394 Cannot find device "nvmf_init_br2" 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:50.394 Cannot find device "nvmf_tgt_br" 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:50.394 Cannot find device "nvmf_tgt_br2" 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:50.394 Cannot find device "nvmf_init_br" 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:50.394 Cannot find device "nvmf_init_br2" 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:50.394 Cannot find device "nvmf_tgt_br" 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:50.394 Cannot find device "nvmf_tgt_br2" 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:50.394 Cannot find device "nvmf_br" 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:50.394 Cannot find device "nvmf_init_if" 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:50.394 Cannot find device "nvmf_init_if2" 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:50.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.394 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:50.394 
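For orientation, the veth/bridge topology that nvmf_veth_init assembles in the surrounding trace condenses to the sketch below (interface names and addresses as logged; the loop and the plain-iptables form of the ipts wrapper are simplifications, and error handling is omitted):

ip netns add nvmf_tgt_ns_spdk
# Four veth pairs: the *_if ends carry addresses, the *_br ends get bridged.
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator, 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator, 10.0.0.2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target, 10.0.0.3
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target, 10.0.0.4
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br        # one L2 segment spanning both namespaces
done
# ipts is iptables plus an SPDK_NVMF comment tag used for later cleanup:
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With this in place, 10.0.0.1/10.0.0.2 in the default namespace and 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk are mutually reachable, which the four pings further down verify in both directions.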
11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:50.394 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:50.654 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:50.654 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:50.654 00:17:50.654 --- 10.0.0.3 ping statistics --- 00:17:50.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.654 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:50.654 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:50.654 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:17:50.654 00:17:50.654 --- 10.0.0.4 ping statistics --- 00:17:50.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.654 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:50.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:50.654 00:17:50.654 --- 10.0.0.1 ping statistics --- 00:17:50.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.654 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:50.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:17:50.654 00:17:50.654 --- 10.0.0.2 ping statistics --- 00:17:50.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.654 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.654 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=89069 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 89069 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 89069 ']' 00:17:50.655 
11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.655 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.655 [2024-12-16 11:39:51.383303] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:17:50.655 [2024-12-16 11:39:51.383408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.932 [2024-12-16 11:39:51.544482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:50.932 [2024-12-16 11:39:51.610293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.932 [2024-12-16 11:39:51.610678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.932 [2024-12-16 11:39:51.610872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.932 [2024-12-16 11:39:51.611016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.932 [2024-12-16 11:39:51.611066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
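The target launch logged above (nvmfpid=89069) reduces to the sketch below. The until-loop is a minimal stand-in for the waitforlisten helper, and the rpc.py path and the rpc_get_methods probe are assumptions for illustration, not the helper's literal implementation:

# Run nvmf_tgt inside the target namespace so its listeners bind 10.0.0.3/10.0.0.4.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Block until the app answers on its UNIX-domain RPC socket (simplified waitforlisten).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done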
00:17:50.932 [2024-12-16 11:39:51.612482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.932 [2024-12-16 11:39:51.612583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.932 [2024-12-16 11:39:51.612728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:50.932 [2024-12-16 11:39:51.612735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.197 [2024-12-16 11:39:51.765768] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.197 Malloc0 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.197 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:51.198 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.198 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.198 [2024-12-16 11:39:51.889245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:51.198 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.198 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:17:51.198 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.198 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:17:51.198 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.198 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:17:51.198 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.198 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:17:51.198 [
00:17:51.198   {
00:17:51.198     "allow_any_host": true,
00:17:51.198     "hosts": [],
00:17:51.198     "listen_addresses": [
00:17:51.198       {
00:17:51.198         "adrfam": "IPv4",
00:17:51.198         "traddr": "10.0.0.3",
00:17:51.198         "trsvcid": "4420",
00:17:51.198         "trtype": "TCP"
00:17:51.198       }
00:17:51.198     ],
00:17:51.198     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:17:51.198     "subtype": "Discovery"
00:17:51.198   },
00:17:51.198   {
00:17:51.198     "allow_any_host": true,
00:17:51.198     "hosts": [],
00:17:51.198     "listen_addresses": [
00:17:51.198       {
00:17:51.198         "adrfam": "IPv4",
00:17:51.198         "traddr": "10.0.0.3",
00:17:51.198         "trsvcid": "4420",
00:17:51.198         "trtype": "TCP"
00:17:51.198       }
00:17:51.198     ],
00:17:51.198     "max_cntlid": 65519,
00:17:51.198     "max_namespaces": 32,
00:17:51.198     "min_cntlid": 1,
00:17:51.198     "model_number": "SPDK bdev Controller",
00:17:51.198     "namespaces": [
00:17:51.198       {
00:17:51.198         "bdev_name": "Malloc0",
00:17:51.198         "eui64": "ABCDEF0123456789",
00:17:51.198         "name": "Malloc0",
00:17:51.198         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:17:51.198         "nsid": 1,
00:17:51.198         "uuid": "4d3a0805-b7fc-4103-87f3-53e15074d998"
00:17:51.198       }
00:17:51.198     ],
00:17:51.198     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:51.198     "serial_number": "SPDK00000000000001",
00:17:51.198     "subtype": "NVMe"
00:17:51.198   }
00:17:51.198 ]
00:17:51.198 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.198 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
[2024-12-16 11:39:51.951187] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
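Replayed outside the harness, the rpc_cmd sequence above (transport, malloc bdev, subsystem, namespace, data and discovery listeners) comes down to this sketch; the rpc.py path is assumed from the repo layout seen elsewhere in this log:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB backing bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_get_subsystems                                  # returns the JSON dump shown above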
00:17:51.456 [2024-12-16 11:39:51.951240] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89113 ] 00:17:51.456 [2024-12-16 11:39:52.109752] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:51.456 [2024-12-16 11:39:52.109827] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:51.456 [2024-12-16 11:39:52.109835] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:51.456 [2024-12-16 11:39:52.109850] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:51.456 [2024-12-16 11:39:52.109861] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:51.456 [2024-12-16 11:39:52.110218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:51.456 [2024-12-16 11:39:52.110280] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6b9d90 0 00:17:51.456 [2024-12-16 11:39:52.116161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:51.456 [2024-12-16 11:39:52.116190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:51.456 [2024-12-16 11:39:52.116197] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:51.456 [2024-12-16 11:39:52.116200] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:51.456 [2024-12-16 11:39:52.116239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.456 [2024-12-16 11:39:52.116248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.456 [2024-12-16 11:39:52.116252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b9d90) 00:17:51.456 [2024-12-16 11:39:52.116267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:51.456 [2024-12-16 11:39:52.116300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fa600, cid 0, qid 0 00:17:51.456 [2024-12-16 11:39:52.124189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.456 [2024-12-16 11:39:52.124213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.456 [2024-12-16 11:39:52.124219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.456 [2024-12-16 11:39:52.124224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fa600) on tqpair=0x6b9d90 00:17:51.456 [2024-12-16 11:39:52.124236] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:51.456 [2024-12-16 11:39:52.124245] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:51.456 [2024-12-16 11:39:52.124251] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:51.456 [2024-12-16 11:39:52.124273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.456 [2024-12-16 11:39:52.124279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
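Stripped of the xtrace noise, the command whose initialization banner appears above is the host/identify.sh invocation below; -L all enables every debug log flag, which is why each TCP PDU and controller state transition is traced in what follows:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all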
00:17:51.456 [2024-12-16 11:39:52.124289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b9d90) 00:17:51.456 [2024-12-16 11:39:52.124299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.456 [2024-12-16 11:39:52.124329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fa600, cid 0, qid 0 00:17:51.456 [2024-12-16 11:39:52.124395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.456 [2024-12-16 11:39:52.124403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.456 [2024-12-16 11:39:52.124407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.456 [2024-12-16 11:39:52.124411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fa600) on tqpair=0x6b9d90 00:17:51.456 [2024-12-16 11:39:52.124422] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:51.456 [2024-12-16 11:39:52.124430] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:51.456 [2024-12-16 11:39:52.124439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.456 [2024-12-16 11:39:52.124443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.456 [2024-12-16 11:39:52.124447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b9d90) 00:17:51.456 [2024-12-16 11:39:52.124455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.456 [2024-12-16 11:39:52.124475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fa600, cid 0, qid 0 00:17:51.456 [2024-12-16 11:39:52.124530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.456 [2024-12-16 11:39:52.124537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.456 [2024-12-16 11:39:52.124541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.124545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fa600) on tqpair=0x6b9d90 00:17:51.457 [2024-12-16 11:39:52.124551] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:51.457 [2024-12-16 11:39:52.124560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:51.457 [2024-12-16 11:39:52.124568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.124572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.124576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.124584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.457 [2024-12-16 11:39:52.124602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fa600, cid 0, qid 0 00:17:51.457 [2024-12-16 11:39:52.124656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.457 [2024-12-16 11:39:52.124663] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.457 [2024-12-16 11:39:52.124667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.124671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fa600) on tqpair=0x6b9d90 00:17:51.457 [2024-12-16 11:39:52.124677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:51.457 [2024-12-16 11:39:52.124688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.124692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.124696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.124704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.457 [2024-12-16 11:39:52.124721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fa600, cid 0, qid 0 00:17:51.457 [2024-12-16 11:39:52.124773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.457 [2024-12-16 11:39:52.124780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.457 [2024-12-16 11:39:52.124783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.124788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fa600) on tqpair=0x6b9d90 00:17:51.457 [2024-12-16 11:39:52.124793] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:51.457 [2024-12-16 11:39:52.124798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:51.457 [2024-12-16 11:39:52.124806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:51.457 [2024-12-16 11:39:52.124927] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:51.457 [2024-12-16 11:39:52.124934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:51.457 [2024-12-16 11:39:52.124944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.124949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.124953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.124960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.457 [2024-12-16 11:39:52.124979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fa600, cid 0, qid 0 00:17:51.457 [2024-12-16 11:39:52.125044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.457 [2024-12-16 11:39:52.125051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.457 [2024-12-16 11:39:52.125055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:17:51.457 [2024-12-16 11:39:52.125059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fa600) on tqpair=0x6b9d90 00:17:51.457 [2024-12-16 11:39:52.125064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:51.457 [2024-12-16 11:39:52.125075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.125091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.457 [2024-12-16 11:39:52.125108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fa600, cid 0, qid 0 00:17:51.457 [2024-12-16 11:39:52.125178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.457 [2024-12-16 11:39:52.125187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.457 [2024-12-16 11:39:52.125190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fa600) on tqpair=0x6b9d90 00:17:51.457 [2024-12-16 11:39:52.125200] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:51.457 [2024-12-16 11:39:52.125206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:51.457 [2024-12-16 11:39:52.125214] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:51.457 [2024-12-16 11:39:52.125225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:51.457 [2024-12-16 11:39:52.125236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.125249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.457 [2024-12-16 11:39:52.125270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fa600, cid 0, qid 0 00:17:51.457 [2024-12-16 11:39:52.125379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.457 [2024-12-16 11:39:52.125386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.457 [2024-12-16 11:39:52.125390] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125395] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6b9d90): datao=0, datal=4096, cccid=0 00:17:51.457 [2024-12-16 11:39:52.125400] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6fa600) on tqpair(0x6b9d90): expected_datao=0, payload_size=4096 00:17:51.457 [2024-12-16 11:39:52.125405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
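The property traffic above is the standard fabrics bring-up: read VS and CAP, observe CC.EN = 0 with CSTS.RDY = 0, write CC.EN = 1, poll until CSTS.RDY = 1, then issue IDENTIFY. Because nvme-tcp was modprobed earlier, the same discovery exchange can also be reproduced with the kernel initiator (assuming nvme-cli is installed; this is not part of the test itself):

nvme discover -t tcp -a 10.0.0.3 -s 4420    # kernel-initiator view of the same discovery log page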
00:17:51.457 [2024-12-16 11:39:52.125415] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125420] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.457 [2024-12-16 11:39:52.125436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.457 [2024-12-16 11:39:52.125439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fa600) on tqpair=0x6b9d90 00:17:51.457 [2024-12-16 11:39:52.125454] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:51.457 [2024-12-16 11:39:52.125460] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:51.457 [2024-12-16 11:39:52.125465] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:51.457 [2024-12-16 11:39:52.125471] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:51.457 [2024-12-16 11:39:52.125476] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:51.457 [2024-12-16 11:39:52.125481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:51.457 [2024-12-16 11:39:52.125490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:51.457 [2024-12-16 11:39:52.125498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.125515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:51.457 [2024-12-16 11:39:52.125536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fa600, cid 0, qid 0 00:17:51.457 [2024-12-16 11:39:52.125604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.457 [2024-12-16 11:39:52.125611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.457 [2024-12-16 11:39:52.125615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fa600) on tqpair=0x6b9d90 00:17:51.457 [2024-12-16 11:39:52.125628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.125644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.457 [2024-12-16 11:39:52.125651] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.125665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.457 [2024-12-16 11:39:52.125672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.125686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.457 [2024-12-16 11:39:52.125692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.125706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.457 [2024-12-16 11:39:52.125712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:51.457 [2024-12-16 11:39:52.125726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:51.457 [2024-12-16 11:39:52.125734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.125746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.457 [2024-12-16 11:39:52.125768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fa600, cid 0, qid 0 00:17:51.457 [2024-12-16 11:39:52.125775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fa780, cid 1, qid 0 00:17:51.457 [2024-12-16 11:39:52.125780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fa900, cid 2, qid 0 00:17:51.457 [2024-12-16 11:39:52.125785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6faa80, cid 3, qid 0 00:17:51.457 [2024-12-16 11:39:52.125790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fac00, cid 4, qid 0 00:17:51.457 [2024-12-16 11:39:52.125881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.457 [2024-12-16 11:39:52.125887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.457 [2024-12-16 11:39:52.125891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fac00) on tqpair=0x6b9d90 00:17:51.457 [2024-12-16 11:39:52.125910] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:51.457 [2024-12-16 11:39:52.125916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:51.457 [2024-12-16 11:39:52.125928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.125933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.125940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.457 [2024-12-16 11:39:52.125959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fac00, cid 4, qid 0 00:17:51.457 [2024-12-16 11:39:52.126024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.457 [2024-12-16 11:39:52.126031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.457 [2024-12-16 11:39:52.126035] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126039] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6b9d90): datao=0, datal=4096, cccid=4 00:17:51.457 [2024-12-16 11:39:52.126043] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6fac00) on tqpair(0x6b9d90): expected_datao=0, payload_size=4096 00:17:51.457 [2024-12-16 11:39:52.126048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126056] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126060] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.457 [2024-12-16 11:39:52.126075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.457 [2024-12-16 11:39:52.126079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fac00) on tqpair=0x6b9d90 00:17:51.457 [2024-12-16 11:39:52.126097] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:51.457 [2024-12-16 11:39:52.126126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.126153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.457 [2024-12-16 11:39:52.126163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.126178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.457 [2024-12-16 11:39:52.126207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x6fac00, cid 4, qid 0 00:17:51.457 [2024-12-16 11:39:52.126214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fad80, cid 5, qid 0 00:17:51.457 [2024-12-16 11:39:52.126325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.457 [2024-12-16 11:39:52.126332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.457 [2024-12-16 11:39:52.126336] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126340] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6b9d90): datao=0, datal=1024, cccid=4 00:17:51.457 [2024-12-16 11:39:52.126345] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6fac00) on tqpair(0x6b9d90): expected_datao=0, payload_size=1024 00:17:51.457 [2024-12-16 11:39:52.126350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126357] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126361] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.457 [2024-12-16 11:39:52.126373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.457 [2024-12-16 11:39:52.126377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.126381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fad80) on tqpair=0x6b9d90 00:17:51.457 [2024-12-16 11:39:52.167205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.457 [2024-12-16 11:39:52.167227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.457 [2024-12-16 11:39:52.167248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.167253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fac00) on tqpair=0x6b9d90 00:17:51.457 [2024-12-16 11:39:52.167269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.167275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6b9d90) 00:17:51.457 [2024-12-16 11:39:52.167284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.457 [2024-12-16 11:39:52.167318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fac00, cid 4, qid 0 00:17:51.457 [2024-12-16 11:39:52.167392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.457 [2024-12-16 11:39:52.167399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.457 [2024-12-16 11:39:52.167403] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.167407] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6b9d90): datao=0, datal=3072, cccid=4 00:17:51.457 [2024-12-16 11:39:52.167412] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6fac00) on tqpair(0x6b9d90): expected_datao=0, payload_size=3072 00:17:51.457 [2024-12-16 11:39:52.167417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.167425] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.457 [2024-12-16 11:39:52.167429] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:17:51.457 [2024-12-16 11:39:52.167438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:51.457 [2024-12-16 11:39:52.167444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:51.457 [2024-12-16 11:39:52.167448] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:51.457 [2024-12-16 11:39:52.167452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fac00) on tqpair=0x6b9d90
00:17:51.457 [2024-12-16 11:39:52.167463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:51.457 [2024-12-16 11:39:52.167468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6b9d90)
00:17:51.457 [2024-12-16 11:39:52.167476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:51.457 [2024-12-16 11:39:52.167502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6fac00, cid 4, qid 0
00:17:51.457 [2024-12-16 11:39:52.167575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:17:51.457 [2024-12-16 11:39:52.167592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:17:51.457 [2024-12-16 11:39:52.167597] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:17:51.457 [2024-12-16 11:39:52.167601] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6b9d90): datao=0, datal=8, cccid=4
00:17:51.457 [2024-12-16 11:39:52.167606] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6fac00) on tqpair(0x6b9d90): expected_datao=0, payload_size=8
00:17:51.457 [2024-12-16 11:39:52.167611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:51.457 [2024-12-16 11:39:52.167618] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:17:51.457 [2024-12-16 11:39:52.167622] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:17:51.721 =====================================================
00:17:51.721 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery
00:17:51.721 =====================================================
00:17:51.721 Controller Capabilities/Features
00:17:51.721 ================================
00:17:51.721 Vendor ID: 0000
00:17:51.721 Subsystem Vendor ID: 0000
00:17:51.721 Serial Number: ....................
00:17:51.721 Model Number: ........................................
00:17:51.721 Firmware Version: 25.01
00:17:51.721 Recommended Arb Burst: 0
00:17:51.721 IEEE OUI Identifier: 00 00 00
00:17:51.721 Multi-path I/O
00:17:51.721 May have multiple subsystem ports: No
00:17:51.721 May have multiple controllers: No
00:17:51.721 Associated with SR-IOV VF: No
00:17:51.721 Max Data Transfer Size: 131072
00:17:51.721 Max Number of Namespaces: 0
00:17:51.721 Max Number of I/O Queues: 1024
00:17:51.721 NVMe Specification Version (VS): 1.3
00:17:51.721 NVMe Specification Version (Identify): 1.3
00:17:51.721 Maximum Queue Entries: 128
00:17:51.721 Contiguous Queues Required: Yes
00:17:51.721 Arbitration Mechanisms Supported
00:17:51.721 Weighted Round Robin: Not Supported
00:17:51.721 Vendor Specific: Not Supported
00:17:51.721 Reset Timeout: 15000 ms
00:17:51.721 Doorbell Stride: 4 bytes
00:17:51.721 NVM Subsystem Reset: Not Supported
00:17:51.721 Command Sets Supported
00:17:51.721 NVM Command Set: Supported
00:17:51.721 Boot Partition: Not Supported
00:17:51.721 Memory Page Size Minimum: 4096 bytes
00:17:51.721 Memory Page Size Maximum: 4096 bytes
00:17:51.721 Persistent Memory Region: Not Supported
00:17:51.721 Optional Asynchronous Events Supported
00:17:51.722 Namespace Attribute Notices: Not Supported
00:17:51.722 Firmware Activation Notices: Not Supported
00:17:51.722 ANA Change Notices: Not Supported
00:17:51.722 PLE Aggregate Log Change Notices: Not Supported
00:17:51.722 LBA Status Info Alert Notices: Not Supported
00:17:51.722 EGE Aggregate Log Change Notices: Not Supported
00:17:51.722 Normal NVM Subsystem Shutdown event: Not Supported
00:17:51.722 Zone Descriptor Change Notices: Not Supported
00:17:51.722 Discovery Log Change Notices: Supported
00:17:51.722 Controller Attributes
00:17:51.722 128-bit Host Identifier: Not Supported
00:17:51.722 Non-Operational Permissive Mode: Not Supported
00:17:51.722 NVM Sets: Not Supported
00:17:51.722 Read Recovery Levels: Not Supported
00:17:51.722 Endurance Groups: Not Supported
00:17:51.722 Predictable Latency Mode: Not Supported
00:17:51.722 Traffic Based Keep ALive: Not Supported
00:17:51.722 Namespace Granularity: Not Supported
00:17:51.722 SQ Associations: Not Supported
00:17:51.722 UUID List: Not Supported
00:17:51.722 Multi-Domain Subsystem: Not Supported
00:17:51.722 Fixed Capacity Management: Not Supported
00:17:51.722 Variable Capacity Management: Not Supported
00:17:51.722 Delete Endurance Group: Not Supported
00:17:51.722 Delete NVM Set: Not Supported
00:17:51.722 Extended LBA Formats Supported: Not Supported
00:17:51.722 Flexible Data Placement Supported: Not Supported
00:17:51.722
00:17:51.722 Controller Memory Buffer Support
00:17:51.722 ================================
00:17:51.722 Supported: No
00:17:51.722
00:17:51.722 Persistent Memory Region Support
00:17:51.722 ================================
00:17:51.722 Supported: No
00:17:51.722
00:17:51.722 Admin Command Set Attributes
00:17:51.722 ============================
00:17:51.722 Security Send/Receive: Not Supported
00:17:51.722 Format NVM: Not Supported
00:17:51.722 Firmware Activate/Download: Not Supported
00:17:51.722 Namespace Management: Not Supported
00:17:51.722 Device Self-Test: Not Supported
00:17:51.722 Directives: Not Supported
00:17:51.722 NVMe-MI: Not Supported
00:17:51.722 Virtualization Management: Not Supported
00:17:51.722 Doorbell Buffer Config: Not Supported
00:17:51.722 Get LBA Status Capability: Not Supported
00:17:51.722 Command & Feature Lockdown Capability: Not Supported
00:17:51.722 Abort Command Limit: 1
00:17:51.722 Async Event Request Limit: 4
00:17:51.722 Number of Firmware Slots: N/A
00:17:51.722 Firmware Slot 1 Read-Only: N/A
[2024-12-16 11:39:52.211218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:51.722 [2024-12-16 11:39:52.211242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:51.722 [2024-12-16 11:39:52.211264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:51.722 [2024-12-16 11:39:52.211269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fac00) on tqpair=0x6b9d90
00:17:51.722 Firmware Activation Without Reset: N/A
00:17:51.722 Multiple Update Detection Support: N/A
00:17:51.722 Firmware Update Granularity: No Information Provided
00:17:51.722 Per-Namespace SMART Log: No
00:17:51.722 Asymmetric Namespace Access Log Page: Not Supported
00:17:51.722 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:17:51.722 Command Effects Log Page: Not Supported
00:17:51.722 Get Log Page Extended Data: Supported
00:17:51.722 Telemetry Log Pages: Not Supported
00:17:51.722 Persistent Event Log Pages: Not Supported
00:17:51.722 Supported Log Pages Log Page: May Support
00:17:51.722 Commands Supported & Effects Log Page: Not Supported
00:17:51.722 Feature Identifiers & Effects Log Page:May Support
00:17:51.722 NVMe-MI Commands & Effects Log Page: May Support
00:17:51.722 Data Area 4 for Telemetry Log: Not Supported
00:17:51.722 Error Log Page Entries Supported: 128
00:17:51.722 Keep Alive: Not Supported
00:17:51.722
00:17:51.722 NVM Command Set Attributes
00:17:51.722 ==========================
00:17:51.722 Submission Queue Entry Size
00:17:51.722 Max: 1
00:17:51.722 Min: 1
00:17:51.722 Completion Queue Entry Size
00:17:51.722 Max: 1
00:17:51.722 Min: 1
00:17:51.722 Number of Namespaces: 0
00:17:51.722 Compare Command: Not Supported
00:17:51.722 Write Uncorrectable Command: Not Supported
00:17:51.722 Dataset Management Command: Not Supported
00:17:51.722 Write Zeroes Command: Not Supported
00:17:51.722 Set Features Save Field: Not Supported
00:17:51.722 Reservations: Not Supported
00:17:51.722 Timestamp: Not Supported
00:17:51.722 Copy: Not Supported
00:17:51.722 Volatile Write Cache: Not Present
00:17:51.722 Atomic Write Unit (Normal): 1
00:17:51.722 Atomic Write Unit (PFail): 1
00:17:51.722 Atomic Compare & Write Unit: 1
00:17:51.722 Fused Compare & Write: Supported
00:17:51.722 Scatter-Gather List
00:17:51.722 SGL Command Set: Supported
00:17:51.722 SGL Keyed: Supported
00:17:51.722 SGL Bit Bucket Descriptor: Not Supported
00:17:51.722 SGL Metadata Pointer: Not Supported
00:17:51.722 Oversized SGL: Not Supported
00:17:51.722 SGL Metadata Address: Not Supported
00:17:51.722 SGL Offset: Supported
00:17:51.722 Transport SGL Data Block: Not Supported
00:17:51.722 Replay Protected Memory Block: Not Supported
00:17:51.722
00:17:51.722 Firmware Slot Information
00:17:51.722 =========================
00:17:51.722 Active slot: 0
00:17:51.722
00:17:51.722
00:17:51.722 Error Log
00:17:51.722 =========
00:17:51.722
00:17:51.722 Active Namespaces
00:17:51.722 =================
00:17:51.722 Discovery Log Page
00:17:51.722 ==================
00:17:51.722 Generation Counter: 2
00:17:51.722 Number of Records: 2
00:17:51.722 Record Format: 0
00:17:51.722
00:17:51.722 Discovery Log Entry 0
00:17:51.722 ----------------------
00:17:51.722 Transport Type: 3 (TCP)
00:17:51.722 Address Family: 1 (IPv4)
00:17:51.722 Subsystem Type: 3 (Current Discovery Subsystem)
00:17:51.722 Entry Flags:
00:17:51.722 Duplicate Returned Information: 1
00:17:51.722 Explicit Persistent Connection Support for Discovery: 1
00:17:51.722 Transport Requirements:
00:17:51.722 Secure Channel: Not Required
00:17:51.722 Port ID: 0 (0x0000)
00:17:51.722 Controller ID: 65535 (0xffff)
00:17:51.722 Admin Max SQ Size: 128
00:17:51.722 Transport Service Identifier: 4420
00:17:51.722 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:17:51.722 Transport Address: 10.0.0.3
00:17:51.722 Discovery Log Entry 1
00:17:51.722 ----------------------
00:17:51.722 Transport Type: 3 (TCP)
00:17:51.722 Address Family: 1 (IPv4)
00:17:51.722 Subsystem Type: 2 (NVM Subsystem)
00:17:51.722 Entry Flags:
00:17:51.722 Duplicate Returned Information: 0
00:17:51.722 Explicit Persistent Connection Support for Discovery: 0
00:17:51.722 Transport Requirements:
00:17:51.722 Secure Channel: Not Required
00:17:51.722 Port ID: 0 (0x0000)
00:17:51.722 Controller ID: 65535 (0xffff)
00:17:51.722 Admin Max SQ Size: 128
00:17:51.722 Transport Service Identifier: 4420
00:17:51.722 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:17:51.722 Transport Address: 10.0.0.3
[2024-12-16 11:39:52.211403] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
[2024-12-16 11:39:52.211421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fa600) on tqpair=0x6b9d90
[2024-12-16 11:39:52.211429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-16 11:39:52.211435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fa780) on tqpair=0x6b9d90
[2024-12-16 11:39:52.211440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-16 11:39:52.211445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6fa900) on tqpair=0x6b9d90
[2024-12-16 11:39:52.211450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-16 11:39:52.211455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6faa80) on tqpair=0x6b9d90
[2024-12-16 11:39:52.211460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-16 11:39:52.211474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-12-16 11:39:52.211480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-12-16 11:39:52.211483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b9d90)
[2024-12-16 11:39:52.211492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-16 11:39:52.211537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6faa80, cid 3, qid 0
[2024-12-16 11:39:52.211618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-12-16 11:39:52.211636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-12-16 11:39:52.211640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-12-16 11:39:52.211645]
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6faa80) on tqpair=0x6b9d90 00:17:51.722 [2024-12-16 11:39:52.211654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.722 [2024-12-16 11:39:52.211658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.722 [2024-12-16 11:39:52.211662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b9d90) 00:17:51.723 [2024-12-16 11:39:52.211670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.723 [2024-12-16 11:39:52.211695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6faa80, cid 3, qid 0 00:17:51.723 [2024-12-16 11:39:52.211770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.723 [2024-12-16 11:39:52.211777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.723 [2024-12-16 11:39:52.211781] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.723 [2024-12-16 11:39:52.211785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6faa80) on tqpair=0x6b9d90 00:17:51.723 [2024-12-16 11:39:52.211791] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:51.723 [2024-12-16 11:39:52.211796] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:51.723 [2024-12-16 11:39:52.211807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.723 [2024-12-16 11:39:52.211812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.723 [2024-12-16 11:39:52.211815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b9d90) 00:17:51.723 [2024-12-16 11:39:52.211823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.723 [2024-12-16 11:39:52.211841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6faa80, cid 3, qid 0 00:17:51.723 [2024-12-16 11:39:52.211894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.723 [2024-12-16 11:39:52.211901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.723 [2024-12-16 11:39:52.211905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.723 [2024-12-16 11:39:52.211909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6faa80) on tqpair=0x6b9d90 00:17:51.723 [2024-12-16 11:39:52.211920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.723 [2024-12-16 11:39:52.211925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.723 [2024-12-16 11:39:52.211929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b9d90) 00:17:51.723 [2024-12-16 11:39:52.211936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.723 [2024-12-16 11:39:52.211953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6faa80, cid 3, qid 0 00:17:51.723 [2024-12-16 11:39:52.212005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.723 [2024-12-16 11:39:52.212011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.723 [2024-12-16 11:39:52.212015] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.723 [2024-12-16 11:39:52.212019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6faa80) on tqpair=0x6b9d90 00:17:51.723 [2024-12-16 11:39:52.218202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.725 [2024-12-16 11:39:52.218225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.725 [2024-12-16 11:39:52.218230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.725 [2024-12-16 11:39:52.218235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6faa80) on tqpair=0x6b9d90 00:17:51.725 [2024-12-16 11:39:52.218249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.725 [2024-12-16 11:39:52.218255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.725 [2024-12-16 11:39:52.218259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b9d90) 00:17:51.725 [2024-12-16 11:39:52.218267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.725 [2024-12-16 11:39:52.218294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6faa80, cid 3, qid 0 00:17:51.725 [2024-12-16 11:39:52.218355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.725 [2024-12-16 11:39:52.218362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.725 [2024-12-16 11:39:52.218366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.725 [2024-12-16 11:39:52.218370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6faa80) on tqpair=0x6b9d90 00:17:51.725 [2024-12-16 11:39:52.218379] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:17:51.725
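The records above are the tail of the discovery pass: after reading the discovery log page shown earlier (two entries, the discovery subsystem itself and the NVM subsystem nqn.2016-06.io.spdk:cnode1, both NVMe/TCP on 10.0.0.3 port 4420), the host tears the discovery controller down gracefully; the FABRIC PROPERTY SET with RTD3E = 0 starts the shutdown, and the run of FABRIC PROPERTY GET capsules is the CSTS poll loop that ends with "shutdown complete in 6 milliseconds". A minimal sketch of the same discovery step done by hand with the kernel initiator's nvme-cli rather than SPDK's host library (assumes nvme-cli and the nvme-tcp module are available; this is illustrative, not part of the test run):

    modprobe nvme-tcp                          # load the kernel NVMe/TCP initiator
    nvme discover -t tcp -a 10.0.0.3 -s 4420   # read the same discovery log page
    # Should report the same two records: the discovery subsystem and
    # the NVM subsystem nqn.2016-06.io.spdk:cnode1 on trsvcid 4420.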
00:17:51.725 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:51.725 [2024-12-16 11:39:52.263442] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:17:51.725 [2024-12-16 11:39:52.263495] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89116 ] 00:17:51.725 [2024-12-16 11:39:52.423637] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:51.725 [2024-12-16 11:39:52.423715] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:51.725 [2024-12-16 11:39:52.423723] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:51.725 [2024-12-16 11:39:52.423735] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:51.725 [2024-12-16 11:39:52.423745] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:51.725 [2024-12-16 11:39:52.424019] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:51.725 [2024-12-16 11:39:52.424078] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x21bed90 0 00:17:51.725 [2024-12-16 11:39:52.429161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:51.725 [2024-12-16 11:39:52.429185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:51.725 [2024-12-16 11:39:52.429191] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:51.725 [2024-12-16 11:39:52.429195] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:51.725 [2024-12-16 11:39:52.429233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.725 [2024-12-16 11:39:52.429240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.725 [2024-12-16 11:39:52.429244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21bed90) 00:17:51.725 [2024-12-16 11:39:52.429257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:51.725 [2024-12-16 11:39:52.429288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff600, cid 0, qid 0 00:17:51.725 [2024-12-16 11:39:52.435258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.725 [2024-12-16
11:39:52.435280] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.725 [2024-12-16 11:39:52.435301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.725 [2024-12-16 11:39:52.435306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff600) on tqpair=0x21bed90 00:17:51.725 [2024-12-16 11:39:52.435320] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:51.725 [2024-12-16 11:39:52.435328] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:51.725 [2024-12-16 11:39:52.435335] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:51.725 [2024-12-16 11:39:52.435355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.725 [2024-12-16 11:39:52.435360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.725 [2024-12-16 11:39:52.435364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21bed90) 00:17:51.725 [2024-12-16 11:39:52.435373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.725 [2024-12-16 11:39:52.435401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff600, cid 0, qid 0 00:17:51.725 [2024-12-16 11:39:52.435476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.725 [2024-12-16 11:39:52.435483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.725 [2024-12-16 11:39:52.435503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.725 [2024-12-16 11:39:52.435507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff600) on tqpair=0x21bed90 00:17:51.725 [2024-12-16 11:39:52.435517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:51.725 [2024-12-16 11:39:52.435527] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:51.726 [2024-12-16 11:39:52.435535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.435539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.435543] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21bed90) 00:17:51.726 [2024-12-16 11:39:52.435551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.726 [2024-12-16 11:39:52.435571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff600, cid 0, qid 0 00:17:51.726 [2024-12-16 11:39:52.436068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.726 [2024-12-16 11:39:52.436083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.726 [2024-12-16 11:39:52.436088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.436092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff600) on tqpair=0x21bed90 00:17:51.726 [2024-12-16 11:39:52.436099] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:51.726 [2024-12-16 
11:39:52.436108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:51.726 [2024-12-16 11:39:52.436117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.436121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.436125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21bed90) 00:17:51.726 [2024-12-16 11:39:52.436133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.726 [2024-12-16 11:39:52.436167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff600, cid 0, qid 0 00:17:51.726 [2024-12-16 11:39:52.436404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.726 [2024-12-16 11:39:52.436419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.726 [2024-12-16 11:39:52.436423] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.436428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff600) on tqpair=0x21bed90 00:17:51.726 [2024-12-16 11:39:52.436434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:51.726 [2024-12-16 11:39:52.436446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.436451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.436455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21bed90) 00:17:51.726 [2024-12-16 11:39:52.436463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.726 [2024-12-16 11:39:52.436483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff600, cid 0, qid 0 00:17:51.726 [2024-12-16 11:39:52.436656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.726 [2024-12-16 11:39:52.436668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.726 [2024-12-16 11:39:52.436672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.436677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff600) on tqpair=0x21bed90 00:17:51.726 [2024-12-16 11:39:52.436682] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:51.726 [2024-12-16 11:39:52.436688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:51.726 [2024-12-16 11:39:52.436697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:51.726 [2024-12-16 11:39:52.436809] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:51.726 [2024-12-16 11:39:52.436818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:51.726 [2024-12-16 11:39:52.436828] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.436833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.436837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21bed90) 00:17:51.726 [2024-12-16 11:39:52.436845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.726 [2024-12-16 11:39:52.436866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff600, cid 0, qid 0 00:17:51.726 [2024-12-16 11:39:52.437219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.726 [2024-12-16 11:39:52.437234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.726 [2024-12-16 11:39:52.437239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.437244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff600) on tqpair=0x21bed90 00:17:51.726 [2024-12-16 11:39:52.437250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:51.726 [2024-12-16 11:39:52.437261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.437266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.437270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21bed90) 00:17:51.726 [2024-12-16 11:39:52.437278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.726 [2024-12-16 11:39:52.437299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff600, cid 0, qid 0 00:17:51.726 [2024-12-16 11:39:52.437557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.726 [2024-12-16 11:39:52.437571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.726 [2024-12-16 11:39:52.437576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.437580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff600) on tqpair=0x21bed90 00:17:51.726 [2024-12-16 11:39:52.437585] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:51.726 [2024-12-16 11:39:52.437591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:51.726 [2024-12-16 11:39:52.437600] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:51.726 [2024-12-16 11:39:52.437611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:51.726 [2024-12-16 11:39:52.437622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.437626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21bed90) 00:17:51.726 [2024-12-16 11:39:52.437634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
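The PROPERTY SET/GET capsules above are the fabrics form of the controller-enable register handshake: CC.EN is written to 1 over FABRIC PROPERTY SET, then CSTS.RDY is polled over FABRIC PROPERTY GET until the log prints "CC.EN = 1 && CSTS.RDY = 1 - controller is ready". The IDENTIFY that follows, cdw10:00000001, is CNS 01h (Identify Controller). A hand-run equivalent against the same target, again sketched with the kernel initiator (the /dev/nvme0 name is an assumption; the actual character device depends on enumeration order):

    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0                    # Identify Controller: CNS 01h, cdw10=1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1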
00:17:51.726 [2024-12-16 11:39:52.437655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff600, cid 0, qid 0 00:17:51.726 [2024-12-16 11:39:52.437997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.726 [2024-12-16 11:39:52.438012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.726 [2024-12-16 11:39:52.438016] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.438021] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21bed90): datao=0, datal=4096, cccid=0 00:17:51.726 [2024-12-16 11:39:52.438026] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ff600) on tqpair(0x21bed90): expected_datao=0, payload_size=4096 00:17:51.726 [2024-12-16 11:39:52.438031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.438040] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.438045] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.438054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.726 [2024-12-16 11:39:52.438060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.726 [2024-12-16 11:39:52.438064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.438068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff600) on tqpair=0x21bed90 00:17:51.726 [2024-12-16 11:39:52.438077] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:51.726 [2024-12-16 11:39:52.438083] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:51.726 [2024-12-16 11:39:52.438088] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:51.726 [2024-12-16 11:39:52.438092] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:51.726 [2024-12-16 11:39:52.438097] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:51.726 [2024-12-16 11:39:52.438103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:51.726 [2024-12-16 11:39:52.438112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:51.726 [2024-12-16 11:39:52.438120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.438125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.726 [2024-12-16 11:39:52.438129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21bed90) 00:17:51.726 [2024-12-16 11:39:52.438137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:51.726 [2024-12-16 11:39:52.438171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff600, cid 0, qid 0 00:17:51.726 [2024-12-16 11:39:52.438589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.726 [2024-12-16 11:39:52.438604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:17:51.727 [2024-12-16 11:39:52.438609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.438613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff600) on tqpair=0x21bed90 00:17:51.727 [2024-12-16 11:39:52.438622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.438626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.438630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21bed90) 00:17:51.727 [2024-12-16 11:39:52.438637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.727 [2024-12-16 11:39:52.438644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.438649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.438653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x21bed90) 00:17:51.727 [2024-12-16 11:39:52.438659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.727 [2024-12-16 11:39:52.438665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.438669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.438673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x21bed90) 00:17:51.727 [2024-12-16 11:39:52.438679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.727 [2024-12-16 11:39:52.438685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.438689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.438693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.727 [2024-12-16 11:39:52.438699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.727 [2024-12-16 11:39:52.438704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:51.727 [2024-12-16 11:39:52.438719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:51.727 [2024-12-16 11:39:52.438728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.438732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21bed90) 00:17:51.727 [2024-12-16 11:39:52.438739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.727 [2024-12-16 11:39:52.438761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff600, cid 0, qid 0 00:17:51.727 [2024-12-16 11:39:52.438778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff780, cid 1, qid 0 00:17:51.727 [2024-12-16 11:39:52.438783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x21ff900, cid 2, qid 0 00:17:51.727 [2024-12-16 11:39:52.438788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.727 [2024-12-16 11:39:52.438793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffc00, cid 4, qid 0 00:17:51.727 [2024-12-16 11:39:52.442267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.727 [2024-12-16 11:39:52.442287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.727 [2024-12-16 11:39:52.442292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.442296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffc00) on tqpair=0x21bed90 00:17:51.727 [2024-12-16 11:39:52.442303] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:51.727 [2024-12-16 11:39:52.442309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:51.727 [2024-12-16 11:39:52.442325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:51.727 [2024-12-16 11:39:52.442333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:51.727 [2024-12-16 11:39:52.442341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.442345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.442355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21bed90) 00:17:51.727 [2024-12-16 11:39:52.442364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:51.727 [2024-12-16 11:39:52.442389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffc00, cid 4, qid 0 00:17:51.727 [2024-12-16 11:39:52.442476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.727 [2024-12-16 11:39:52.442483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.727 [2024-12-16 11:39:52.442487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.442491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffc00) on tqpair=0x21bed90 00:17:51.727 [2024-12-16 11:39:52.442555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:51.727 [2024-12-16 11:39:52.442567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:51.727 [2024-12-16 11:39:52.442576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.442580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21bed90) 00:17:51.727 [2024-12-16 11:39:52.442588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.727 [2024-12-16 11:39:52.442609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x21ffc00, cid 4, qid 0 00:17:51.727 [2024-12-16 11:39:52.443020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.727 [2024-12-16 11:39:52.443036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.727 [2024-12-16 11:39:52.443041] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.443045] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21bed90): datao=0, datal=4096, cccid=4 00:17:51.727 [2024-12-16 11:39:52.443050] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ffc00) on tqpair(0x21bed90): expected_datao=0, payload_size=4096 00:17:51.727 [2024-12-16 11:39:52.443055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.443063] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.443068] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.443076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.727 [2024-12-16 11:39:52.443083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.727 [2024-12-16 11:39:52.443087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.443091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffc00) on tqpair=0x21bed90 00:17:51.727 [2024-12-16 11:39:52.443114] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:51.727 [2024-12-16 11:39:52.443126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:51.727 [2024-12-16 11:39:52.443138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:51.727 [2024-12-16 11:39:52.443160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.443165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21bed90) 00:17:51.727 [2024-12-16 11:39:52.443173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.727 [2024-12-16 11:39:52.443197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffc00, cid 4, qid 0 00:17:51.727 [2024-12-16 11:39:52.443522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.727 [2024-12-16 11:39:52.443537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.727 [2024-12-16 11:39:52.443541] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.443545] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21bed90): datao=0, datal=4096, cccid=4 00:17:51.727 [2024-12-16 11:39:52.443550] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ffc00) on tqpair(0x21bed90): expected_datao=0, payload_size=4096 00:17:51.727 [2024-12-16 11:39:52.443555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.443563] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.443567] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.443576] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.727 [2024-12-16 11:39:52.443591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.727 [2024-12-16 11:39:52.443596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.443600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffc00) on tqpair=0x21bed90 00:17:51.727 [2024-12-16 11:39:52.443618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:51.727 [2024-12-16 11:39:52.443631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:51.727 [2024-12-16 11:39:52.443640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.443645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21bed90) 00:17:51.727 [2024-12-16 11:39:52.443653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.727 [2024-12-16 11:39:52.443676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffc00, cid 4, qid 0 00:17:51.727 [2024-12-16 11:39:52.444048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.727 [2024-12-16 11:39:52.444064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.727 [2024-12-16 11:39:52.444068] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.444072] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21bed90): datao=0, datal=4096, cccid=4 00:17:51.727 [2024-12-16 11:39:52.444077] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ffc00) on tqpair(0x21bed90): expected_datao=0, payload_size=4096 00:17:51.727 [2024-12-16 11:39:52.444082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.444090] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.444094] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.444103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.727 [2024-12-16 11:39:52.444109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.727 [2024-12-16 11:39:52.444113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.727 [2024-12-16 11:39:52.444117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffc00) on tqpair=0x21bed90 00:17:51.727 [2024-12-16 11:39:52.444127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:51.727 [2024-12-16 11:39:52.444137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:51.728 [2024-12-16 11:39:52.444160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:51.728 [2024-12-16 11:39:52.444173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature 
(timeout 30000 ms) 00:17:51.728 [2024-12-16 11:39:52.444179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:51.728 [2024-12-16 11:39:52.444185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:51.728 [2024-12-16 11:39:52.444191] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:51.728 [2024-12-16 11:39:52.444196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:51.728 [2024-12-16 11:39:52.444202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:51.728 [2024-12-16 11:39:52.444219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.444224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21bed90) 00:17:51.728 [2024-12-16 11:39:52.444232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.728 [2024-12-16 11:39:52.444240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.444244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.444248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21bed90) 00:17:51.728 [2024-12-16 11:39:52.444254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.728 [2024-12-16 11:39:52.444283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffc00, cid 4, qid 0 00:17:51.728 [2024-12-16 11:39:52.444291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffd80, cid 5, qid 0 00:17:51.728 [2024-12-16 11:39:52.444661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.728 [2024-12-16 11:39:52.444677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.728 [2024-12-16 11:39:52.444682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.444686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffc00) on tqpair=0x21bed90 00:17:51.728 [2024-12-16 11:39:52.444693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.728 [2024-12-16 11:39:52.444700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.728 [2024-12-16 11:39:52.444704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.444708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffd80) on tqpair=0x21bed90 00:17:51.728 [2024-12-16 11:39:52.444719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.444724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21bed90) 00:17:51.728 [2024-12-16 11:39:52.444732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.728 [2024-12-16 11:39:52.444752] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffd80, cid 5, qid 0 00:17:51.728 [2024-12-16 11:39:52.444913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.728 [2024-12-16 11:39:52.444920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.728 [2024-12-16 11:39:52.444923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.444928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffd80) on tqpair=0x21bed90 00:17:51.728 [2024-12-16 11:39:52.444938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.444943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21bed90) 00:17:51.728 [2024-12-16 11:39:52.444950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.728 [2024-12-16 11:39:52.444968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffd80, cid 5, qid 0 00:17:51.728 [2024-12-16 11:39:52.445352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.728 [2024-12-16 11:39:52.445367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.728 [2024-12-16 11:39:52.445372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.445376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffd80) on tqpair=0x21bed90 00:17:51.728 [2024-12-16 11:39:52.445388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.445393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21bed90) 00:17:51.728 [2024-12-16 11:39:52.445400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.728 [2024-12-16 11:39:52.445420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffd80, cid 5, qid 0 00:17:51.728 [2024-12-16 11:39:52.445670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.728 [2024-12-16 11:39:52.445683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.728 [2024-12-16 11:39:52.445688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.445692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffd80) on tqpair=0x21bed90 00:17:51.728 [2024-12-16 11:39:52.445721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.445727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21bed90) 00:17:51.728 [2024-12-16 11:39:52.445735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.728 [2024-12-16 11:39:52.445742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.445747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21bed90) 00:17:51.728 [2024-12-16 11:39:52.445753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.728 [2024-12-16 
11:39:52.445761] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.445765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x21bed90) 00:17:51.728 [2024-12-16 11:39:52.445772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.728 [2024-12-16 11:39:52.445780] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.445784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x21bed90) 00:17:51.728 [2024-12-16 11:39:52.445791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.728 [2024-12-16 11:39:52.445813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffd80, cid 5, qid 0 00:17:51.728 [2024-12-16 11:39:52.445820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffc00, cid 4, qid 0 00:17:51.728 [2024-12-16 11:39:52.445825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fff00, cid 6, qid 0 00:17:51.728 [2024-12-16 11:39:52.445830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2200080, cid 7, qid 0 00:17:51.728 [2024-12-16 11:39:52.453222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.728 [2024-12-16 11:39:52.453242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.728 [2024-12-16 11:39:52.453246] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453267] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21bed90): datao=0, datal=8192, cccid=5 00:17:51.728 [2024-12-16 11:39:52.453272] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ffd80) on tqpair(0x21bed90): expected_datao=0, payload_size=8192 00:17:51.728 [2024-12-16 11:39:52.453277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453285] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453289] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.728 [2024-12-16 11:39:52.453300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.728 [2024-12-16 11:39:52.453304] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453307] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21bed90): datao=0, datal=512, cccid=4 00:17:51.728 [2024-12-16 11:39:52.453312] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ffc00) on tqpair(0x21bed90): expected_datao=0, payload_size=512 00:17:51.728 [2024-12-16 11:39:52.453317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453323] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453327] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.728 [2024-12-16 11:39:52.453338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =7 00:17:51.728 [2024-12-16 11:39:52.453341] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453345] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21bed90): datao=0, datal=512, cccid=6 00:17:51.728 [2024-12-16 11:39:52.453349] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21fff00) on tqpair(0x21bed90): expected_datao=0, payload_size=512 00:17:51.728 [2024-12-16 11:39:52.453353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453359] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453363] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.728 [2024-12-16 11:39:52.453374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.728 [2024-12-16 11:39:52.453377] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453381] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21bed90): datao=0, datal=4096, cccid=7 00:17:51.728 [2024-12-16 11:39:52.453385] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2200080) on tqpair(0x21bed90): expected_datao=0, payload_size=4096 00:17:51.728 [2024-12-16 11:39:52.453389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453395] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453399] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.728 [2024-12-16 11:39:52.453410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.728 [2024-12-16 11:39:52.453413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffd80) on tqpair=0x21bed90 00:17:51.728 [2024-12-16 11:39:52.453478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.728 [2024-12-16 11:39:52.453486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.728 [2024-12-16 11:39:52.453490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.728 [2024-12-16 11:39:52.453494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffc00) on tqpair=0x21bed90 00:17:51.728 ===================================================== 00:17:51.728 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.728 ===================================================== 00:17:51.728 Controller Capabilities/Features 00:17:51.729 ================================ 00:17:51.729 Vendor ID: 8086 00:17:51.729 Subsystem Vendor ID: 8086 00:17:51.729 Serial Number: SPDK00000000000001 00:17:51.729 Model Number: SPDK bdev Controller 00:17:51.729 Firmware Version: 25.01 00:17:51.729 Recommended Arb Burst: 6 00:17:51.729 IEEE OUI Identifier: e4 d2 5c 00:17:51.729 Multi-path I/O 00:17:51.729 May have multiple subsystem ports: Yes 00:17:51.729 May have multiple controllers: Yes 00:17:51.729 Associated with SR-IOV VF: No 00:17:51.729 Max Data Transfer Size: 131072 00:17:51.729 Max Number of Namespaces: 32 00:17:51.729 Max Number of I/O Queues: 127 
00:17:51.729 NVMe Specification Version (VS): 1.3 00:17:51.729 NVMe Specification Version (Identify): 1.3 00:17:51.729 Maximum Queue Entries: 128 00:17:51.729 Contiguous Queues Required: Yes 00:17:51.729 Arbitration Mechanisms Supported 00:17:51.729 Weighted Round Robin: Not Supported 00:17:51.729 Vendor Specific: Not Supported 00:17:51.729 Reset Timeout: 15000 ms 00:17:51.729 Doorbell Stride: 4 bytes 00:17:51.729 NVM Subsystem Reset: Not Supported 00:17:51.729 Command Sets Supported 00:17:51.729 NVM Command Set: Supported 00:17:51.729 Boot Partition: Not Supported 00:17:51.729 Memory Page Size Minimum: 4096 bytes 00:17:51.729 Memory Page Size Maximum: 4096 bytes 00:17:51.729 Persistent Memory Region: Not Supported 00:17:51.729 Optional Asynchronous Events Supported 00:17:51.729 Namespace Attribute Notices: Supported 00:17:51.729 Firmware Activation Notices: Not Supported 00:17:51.729 ANA Change Notices: Not Supported 00:17:51.729 PLE Aggregate Log Change Notices: Not Supported 00:17:51.729 LBA Status Info Alert Notices: Not Supported 00:17:51.729 EGE Aggregate Log Change Notices: Not Supported 00:17:51.729 Normal NVM Subsystem Shutdown event: Not Supported 00:17:51.729 Zone Descriptor Change Notices: Not Supported 00:17:51.729 Discovery Log Change Notices: Not Supported 00:17:51.729 Controller Attributes 00:17:51.729 128-bit Host Identifier: Supported 00:17:51.729 Non-Operational Permissive Mode: Not Supported 00:17:51.729 NVM Sets: Not Supported 00:17:51.729 Read Recovery Levels: Not Supported 00:17:51.729 Endurance Groups: Not Supported 00:17:51.729 Predictable Latency Mode: Not Supported 00:17:51.729 Traffic Based Keep Alive: Not Supported 00:17:51.729 Namespace Granularity: Not Supported 00:17:51.729 SQ Associations: Not Supported 00:17:51.729 UUID List: Not Supported 00:17:51.729 Multi-Domain Subsystem: Not Supported 00:17:51.729 Fixed Capacity Management: Not Supported 00:17:51.729 Variable Capacity Management: Not Supported 00:17:51.729 Delete Endurance Group: Not Supported 00:17:51.729 Delete NVM Set: Not Supported 00:17:51.729 Extended LBA Formats Supported: Not Supported 00:17:51.729 Flexible Data Placement Supported: Not Supported 00:17:51.729 00:17:51.729 Controller Memory Buffer Support 00:17:51.729 ================================ 00:17:51.729 Supported: No 00:17:51.729 00:17:51.729 Persistent Memory Region Support 00:17:51.729 ================================ 00:17:51.729 Supported: No 00:17:51.729 00:17:51.729 Admin Command Set Attributes 00:17:51.729 ============================ 00:17:51.729 Security Send/Receive: Not Supported 00:17:51.729 Format NVM: Not Supported 00:17:51.729 Firmware Activate/Download: Not Supported 00:17:51.729 Namespace Management: Not Supported 00:17:51.729 Device Self-Test: Not Supported 00:17:51.729 Directives: Not Supported 00:17:51.729 NVMe-MI: Not Supported 00:17:51.729 Virtualization Management: Not Supported 00:17:51.729 Doorbell Buffer Config: Not Supported 00:17:51.729 Get LBA Status Capability: Not Supported 00:17:51.729 Command & Feature Lockdown Capability: Not Supported 00:17:51.729 Abort Command Limit: 4 00:17:51.729 Async Event Request Limit: 4 00:17:51.729 Number of Firmware Slots: N/A 00:17:51.729 Firmware Slot 1 Read-Only: N/A 00:17:51.729 Firmware Activation Without Reset: [2024-12-16 11:39:52.453507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.729 [2024-12-16 11:39:52.453514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.729 [2024-12-16 11:39:52.453517]
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.729 [2024-12-16 11:39:52.453521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21fff00) on tqpair=0x21bed90 00:17:51.729 [2024-12-16 11:39:52.453529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.729 [2024-12-16 11:39:52.453535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.729 [2024-12-16 11:39:52.453539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.729 [2024-12-16 11:39:52.453543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2200080) on tqpair=0x21bed90 00:17:51.729 N/A 00:17:51.729 Multiple Update Detection Support: N/A 00:17:51.729 Firmware Update Granularity: No Information Provided 00:17:51.729 Per-Namespace SMART Log: No 00:17:51.729 Asymmetric Namespace Access Log Page: Not Supported 00:17:51.729 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:51.729 Command Effects Log Page: Supported 00:17:51.729 Get Log Page Extended Data: Supported 00:17:51.729 Telemetry Log Pages: Not Supported 00:17:51.729 Persistent Event Log Pages: Not Supported 00:17:51.729 Supported Log Pages Log Page: May Support 00:17:51.729 Commands Supported & Effects Log Page: Not Supported 00:17:51.729 Feature Identifiers & Effects Log Page: May Support 00:17:51.729 NVMe-MI Commands & Effects Log Page: May Support 00:17:51.729 Data Area 4 for Telemetry Log: Not Supported 00:17:51.729 Error Log Page Entries Supported: 128 00:17:51.729 Keep Alive: Supported 00:17:51.729 Keep Alive Granularity: 10000 ms 00:17:51.729 00:17:51.729 NVM Command Set Attributes 00:17:51.729 ========================== 00:17:51.729 Submission Queue Entry Size 00:17:51.729 Max: 64 00:17:51.729 Min: 64 00:17:51.729 Completion Queue Entry Size 00:17:51.729 Max: 16 00:17:51.729 Min: 16 00:17:51.729 Number of Namespaces: 32 00:17:51.729 Compare Command: Supported 00:17:51.729 Write Uncorrectable Command: Not Supported 00:17:51.729 Dataset Management Command: Supported 00:17:51.729 Write Zeroes Command: Supported 00:17:51.729 Set Features Save Field: Not Supported 00:17:51.729 Reservations: Supported 00:17:51.729 Timestamp: Not Supported 00:17:51.729 Copy: Supported 00:17:51.729 Volatile Write Cache: Present 00:17:51.729 Atomic Write Unit (Normal): 1 00:17:51.729 Atomic Write Unit (PFail): 1 00:17:51.729 Atomic Compare & Write Unit: 1 00:17:51.729 Fused Compare & Write: Supported 00:17:51.729 Scatter-Gather List 00:17:51.729 SGL Command Set: Supported 00:17:51.729 SGL Keyed: Supported 00:17:51.729 SGL Bit Bucket Descriptor: Not Supported 00:17:51.729 SGL Metadata Pointer: Not Supported 00:17:51.729 Oversized SGL: Not Supported 00:17:51.729 SGL Metadata Address: Not Supported 00:17:51.729 SGL Offset: Supported 00:17:51.729 Transport SGL Data Block: Not Supported 00:17:51.729 Replay Protected Memory Block: Not Supported 00:17:51.729 00:17:51.729 Firmware Slot Information 00:17:51.729 ========================= 00:17:51.729 Active slot: 1 00:17:51.729 Slot 1 Firmware Revision: 25.01 00:17:51.729 00:17:51.729 00:17:51.729 Commands Supported and Effects 00:17:51.729 ============================== 00:17:51.729 Admin Commands 00:17:51.729 -------------- 00:17:51.729 Get Log Page (02h): Supported 00:17:51.729 Identify (06h): Supported 00:17:51.729 Abort (08h): Supported 00:17:51.729 Set Features (09h): Supported 00:17:51.729 Get Features (0Ah): Supported 00:17:51.729 Asynchronous Event Request (0Ch): Supported 00:17:51.729 Keep Alive (18h): Supported
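
The capability and command tables in this dump are printed by SPDK's host-side identify example, which connects to the listener named in the banner (10.0.0.3:4420) and prints the Identify Controller / Identify Namespace data it reads back. A minimal sketch of reproducing the same dump by hand, assuming the stock build/examples layout this run appears to use:

    # Sketch: query the subsystem shown in the banner above; the binary path and
    # the -r transport-ID string follow SPDK's usual conventions.
    sudo ./build/examples/identify \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"
    # Omitting the subnqn key would target the discovery controller instead of cnode1.

The *DEBUG* records interleaved through the tables come from the same process: a debug build of SPDK logs every PDU and controller state transition to stderr while the identify output goes to stdout, so the two streams shuffle together in the captured log.
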
00:17:51.729 I/O Commands 00:17:51.729 ------------ 00:17:51.729 Flush (00h): Supported LBA-Change 00:17:51.729 Write (01h): Supported LBA-Change 00:17:51.729 Read (02h): Supported 00:17:51.729 Compare (05h): Supported 00:17:51.729 Write Zeroes (08h): Supported LBA-Change 00:17:51.729 Dataset Management (09h): Supported LBA-Change 00:17:51.729 Copy (19h): Supported LBA-Change 00:17:51.729 00:17:51.729 Error Log 00:17:51.729 ========= 00:17:51.729 00:17:51.729 Arbitration 00:17:51.729 =========== 00:17:51.729 Arbitration Burst: 1 00:17:51.729 00:17:51.729 Power Management 00:17:51.729 ================ 00:17:51.729 Number of Power States: 1 00:17:51.729 Current Power State: Power State #0 00:17:51.729 Power State #0: 00:17:51.729 Max Power: 0.00 W 00:17:51.729 Non-Operational State: Operational 00:17:51.729 Entry Latency: Not Reported 00:17:51.729 Exit Latency: Not Reported 00:17:51.729 Relative Read Throughput: 0 00:17:51.729 Relative Read Latency: 0 00:17:51.729 Relative Write Throughput: 0 00:17:51.729 Relative Write Latency: 0 00:17:51.729 Idle Power: Not Reported 00:17:51.729 Active Power: Not Reported 00:17:51.729 Non-Operational Permissive Mode: Not Supported 00:17:51.729 00:17:51.729 Health Information 00:17:51.729 ================== 00:17:51.729 Critical Warnings: 00:17:51.729 Available Spare Space: OK 00:17:51.729 Temperature: OK 00:17:51.729 Device Reliability: OK 00:17:51.729 Read Only: No 00:17:51.729 Volatile Memory Backup: OK 00:17:51.729 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:51.729 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:51.729 Available Spare: 0% 00:17:51.729 Available Spare Threshold: 0% 00:17:51.730 Life Percentage Used:[2024-12-16 11:39:52.453650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.453657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x21bed90) 00:17:51.730 [2024-12-16 11:39:52.453666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.730 [2024-12-16 11:39:52.453694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2200080, cid 7, qid 0 00:17:51.730 [2024-12-16 11:39:52.454193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.730 [2024-12-16 11:39:52.454200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.730 [2024-12-16 11:39:52.454204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2200080) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.454327] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:51.730 [2024-12-16 11:39:52.454345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff600) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.454352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.730 [2024-12-16 11:39:52.454358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff780) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.454363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.730 [2024-12-16 11:39:52.454368] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff900) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.454372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.730 [2024-12-16 11:39:52.454377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.454382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.730 [2024-12-16 11:39:52.454391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.730 [2024-12-16 11:39:52.454408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.730 [2024-12-16 11:39:52.454436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.730 [2024-12-16 11:39:52.454522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.730 [2024-12-16 11:39:52.454530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.730 [2024-12-16 11:39:52.454534] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.454546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.730 [2024-12-16 11:39:52.454562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.730 [2024-12-16 11:39:52.454583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.730 [2024-12-16 11:39:52.454659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.730 [2024-12-16 11:39:52.454666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.730 [2024-12-16 11:39:52.454669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.454679] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:51.730 [2024-12-16 11:39:52.454684] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:51.730 [2024-12-16 11:39:52.454694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.730 [2024-12-16 11:39:52.454712] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.730 [2024-12-16 11:39:52.454729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.730 [2024-12-16 11:39:52.454785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.730 [2024-12-16 11:39:52.454791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.730 [2024-12-16 11:39:52.454795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.454810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.730 [2024-12-16 11:39:52.454842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.730 [2024-12-16 11:39:52.454858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.730 [2024-12-16 11:39:52.454908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.730 [2024-12-16 11:39:52.454914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.730 [2024-12-16 11:39:52.454918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.454932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.454941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.730 [2024-12-16 11:39:52.454948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.730 [2024-12-16 11:39:52.454964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.730 [2024-12-16 11:39:52.455017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.730 [2024-12-16 11:39:52.455024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.730 [2024-12-16 11:39:52.455027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.455031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.455041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.455046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.455049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.730 [2024-12-16 11:39:52.455056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.730 [2024-12-16 11:39:52.455073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.730 [2024-12-16 11:39:52.455125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.730 [2024-12-16 11:39:52.455132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.730 [2024-12-16 11:39:52.455136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.455140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.455149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.455156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.455160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.730 [2024-12-16 11:39:52.455168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.730 [2024-12-16 11:39:52.455197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.730 [2024-12-16 11:39:52.455653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.730 [2024-12-16 11:39:52.455667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.730 [2024-12-16 11:39:52.455671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.455676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.455687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.455692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.455696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.730 [2024-12-16 11:39:52.455704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.730 [2024-12-16 11:39:52.455725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.730 [2024-12-16 11:39:52.455782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.730 [2024-12-16 11:39:52.455788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.730 [2024-12-16 11:39:52.455792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.455796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.730 [2024-12-16 11:39:52.455807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.455812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.730 [2024-12-16 11:39:52.455816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.730 [2024-12-16 11:39:52.455823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.730 [2024-12-16 11:39:52.455841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.731 [2024-12-16 11:39:52.455894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.731 [2024-12-16 11:39:52.455900] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.731 [2024-12-16 11:39:52.455904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.455908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.731 [2024-12-16 11:39:52.455934] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.455939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.455942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.731 [2024-12-16 11:39:52.455949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.731 [2024-12-16 11:39:52.455966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.731 [2024-12-16 11:39:52.456021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.731 [2024-12-16 11:39:52.456027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.731 [2024-12-16 11:39:52.456031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.731 [2024-12-16 11:39:52.456045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.731 [2024-12-16 11:39:52.456060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.731 [2024-12-16 11:39:52.456077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.731 [2024-12-16 11:39:52.456129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.731 [2024-12-16 11:39:52.456136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.731 [2024-12-16 11:39:52.456141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.731 [2024-12-16 11:39:52.456182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.731 [2024-12-16 11:39:52.456200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.731 [2024-12-16 11:39:52.456219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.731 [2024-12-16 11:39:52.456291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.731 [2024-12-16 11:39:52.456297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.731 [2024-12-16 11:39:52.456301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.731 [2024-12-16 
11:39:52.456308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.731 [2024-12-16 11:39:52.456335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.731 [2024-12-16 11:39:52.456351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.731 [2024-12-16 11:39:52.456368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.731 [2024-12-16 11:39:52.456422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.731 [2024-12-16 11:39:52.456429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.731 [2024-12-16 11:39:52.456433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.731 [2024-12-16 11:39:52.456447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.731 [2024-12-16 11:39:52.456463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.731 [2024-12-16 11:39:52.456480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.731 [2024-12-16 11:39:52.456538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.731 [2024-12-16 11:39:52.456545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.731 [2024-12-16 11:39:52.456549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.731 [2024-12-16 11:39:52.456563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.731 [2024-12-16 11:39:52.456579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.731 [2024-12-16 11:39:52.456596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.731 [2024-12-16 11:39:52.456959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.731 [2024-12-16 11:39:52.456968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.731 [2024-12-16 11:39:52.456971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.731 [2024-12-16 11:39:52.456986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.456994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.731 [2024-12-16 11:39:52.457001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.731 [2024-12-16 11:39:52.457018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.731 [2024-12-16 11:39:52.457111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.731 [2024-12-16 11:39:52.457117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.731 [2024-12-16 11:39:52.457121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.457125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.731 [2024-12-16 11:39:52.457135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.457139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.457143] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.731 [2024-12-16 11:39:52.457167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.731 [2024-12-16 11:39:52.457183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.731 [2024-12-16 11:39:52.461228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.731 [2024-12-16 11:39:52.461249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.731 [2024-12-16 11:39:52.461254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.461258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.731 [2024-12-16 11:39:52.461272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.461277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.461281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21bed90) 00:17:51.731 [2024-12-16 11:39:52.461290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.731 [2024-12-16 11:39:52.461314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffa80, cid 3, qid 0 00:17:51.731 [2024-12-16 11:39:52.461386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.731 [2024-12-16 11:39:52.461392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.731 [2024-12-16 11:39:52.461396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.731 [2024-12-16 11:39:52.461400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffa80) on tqpair=0x21bed90 00:17:51.731 [2024-12-16 11:39:52.461424] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:17:51.990 0% 00:17:51.990 Data Units Read: 0 00:17:51.990 Data Units Written: 0 00:17:51.990 Host Read Commands: 0 
00:17:51.990 Host Write Commands: 0 00:17:51.990 Controller Busy Time: 0 minutes 00:17:51.990 Power Cycles: 0 00:17:51.990 Power On Hours: 0 hours 00:17:51.990 Unsafe Shutdowns: 0 00:17:51.990 Unrecoverable Media Errors: 0 00:17:51.990 Lifetime Error Log Entries: 0 00:17:51.990 Warning Temperature Time: 0 minutes 00:17:51.990 Critical Temperature Time: 0 minutes 00:17:51.990 00:17:51.990 Number of Queues 00:17:51.990 ================ 00:17:51.990 Number of I/O Submission Queues: 127 00:17:51.990 Number of I/O Completion Queues: 127 00:17:51.990 00:17:51.990 Active Namespaces 00:17:51.990 ================= 00:17:51.990 Namespace ID:1 00:17:51.990 Error Recovery Timeout: Unlimited 00:17:51.990 Command Set Identifier: NVM (00h) 00:17:51.990 Deallocate: Supported 00:17:51.990 Deallocated/Unwritten Error: Not Supported 00:17:51.990 Deallocated Read Value: Unknown 00:17:51.990 Deallocate in Write Zeroes: Not Supported 00:17:51.990 Deallocated Guard Field: 0xFFFF 00:17:51.990 Flush: Supported 00:17:51.990 Reservation: Supported 00:17:51.990 Namespace Sharing Capabilities: Multiple Controllers 00:17:51.990 Size (in LBAs): 131072 (0GiB) 00:17:51.990 Capacity (in LBAs): 131072 (0GiB) 00:17:51.990 Utilization (in LBAs): 131072 (0GiB) 00:17:51.990 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:51.990 EUI64: ABCDEF0123456789 00:17:51.990 UUID: 4d3a0805-b7fc-4103-87f3-53e15074d998 00:17:51.990 Thin Provisioning: Not Supported 00:17:51.990 Per-NS Atomic Units: Yes 00:17:51.990 Atomic Boundary Size (Normal): 0 00:17:51.990 Atomic Boundary Size (PFail): 0 00:17:51.990 Atomic Boundary Offset: 0 00:17:51.990 Maximum Single Source Range Length: 65535 00:17:51.990 Maximum Copy Length: 65535 00:17:51.990 Maximum Source Range Count: 1 00:17:51.990 NGUID/EUI64 Never Reused: No 00:17:51.990 Namespace Write Protected: No 00:17:51.990 Number of LBA Formats: 1 00:17:51.990 Current LBA Format: LBA Format #00 00:17:51.990 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:51.990 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.990 rmmod nvme_tcp 00:17:51.990 rmmod nvme_fabrics 00:17:51.990 rmmod nvme_keyring 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.990 11:39:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 89069 ']' 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 89069 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 89069 ']' 00:17:51.990 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 89069 00:17:51.991 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:17:51.991 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.991 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89069 00:17:51.991 killing process with pid 89069 00:17:51.991 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.991 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.991 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89069' 00:17:51.991 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 89069 00:17:51.991 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 89069 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:52.249 11:39:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:52.507 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 
-- # ip link delete nvmf_init_if 00:17:52.507 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:52.507 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.507 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.507 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:52.507 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.507 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.507 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.507 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:52.507 00:17:52.507 real 0m2.472s 00:17:52.507 user 0m5.213s 00:17:52.507 sys 0m0.800s 00:17:52.507 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.507 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.507 ************************************ 00:17:52.508 END TEST nvmf_identify 00:17:52.508 ************************************ 00:17:52.508 11:39:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:52.508 11:39:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.508 11:39:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.508 11:39:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.508 ************************************ 00:17:52.508 START TEST nvmf_perf 00:17:52.508 ************************************ 00:17:52.508 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:52.508 * Looking for test storage... 
00:17:52.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:52.508 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:52.508 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:17:52.508 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:52.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.768 --rc genhtml_branch_coverage=1 00:17:52.768 --rc genhtml_function_coverage=1 00:17:52.768 --rc genhtml_legend=1 00:17:52.768 --rc geninfo_all_blocks=1 00:17:52.768 --rc geninfo_unexecuted_blocks=1 00:17:52.768 00:17:52.768 ' 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:52.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.768 --rc genhtml_branch_coverage=1 00:17:52.768 --rc genhtml_function_coverage=1 00:17:52.768 --rc genhtml_legend=1 00:17:52.768 --rc geninfo_all_blocks=1 00:17:52.768 --rc geninfo_unexecuted_blocks=1 00:17:52.768 00:17:52.768 ' 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:52.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.768 --rc genhtml_branch_coverage=1 00:17:52.768 --rc genhtml_function_coverage=1 00:17:52.768 --rc genhtml_legend=1 00:17:52.768 --rc geninfo_all_blocks=1 00:17:52.768 --rc geninfo_unexecuted_blocks=1 00:17:52.768 00:17:52.768 ' 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:52.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.768 --rc genhtml_branch_coverage=1 00:17:52.768 --rc genhtml_function_coverage=1 00:17:52.768 --rc genhtml_legend=1 00:17:52.768 --rc geninfo_all_blocks=1 00:17:52.768 --rc geninfo_unexecuted_blocks=1 00:17:52.768 00:17:52.768 ' 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.768 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.769 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:52.769 Cannot find device "nvmf_init_br" 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:52.769 Cannot find device "nvmf_init_br2" 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:52.769 Cannot find device "nvmf_tgt_br" 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.769 Cannot find device "nvmf_tgt_br2" 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:52.769 Cannot find device "nvmf_init_br" 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:52.769 Cannot find device "nvmf_init_br2" 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:52.769 Cannot find device "nvmf_tgt_br" 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:52.769 Cannot find device "nvmf_tgt_br2" 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:52.769 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:53.028 Cannot find device "nvmf_br" 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:53.028 Cannot find device "nvmf_init_if" 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:53.028 Cannot find device "nvmf_init_if2" 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:53.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:53.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:53.028 11:39:53 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:53.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:53.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:53.028 00:17:53.028 --- 10.0.0.3 ping statistics --- 00:17:53.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.028 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:53.028 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:53.028 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:17:53.028 00:17:53.028 --- 10.0.0.4 ping statistics --- 00:17:53.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.028 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:53.028 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:53.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:53.287 00:17:53.287 --- 10.0.0.1 ping statistics --- 00:17:53.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.287 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:53.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:17:53.287 00:17:53.287 --- 10.0.0.2 ping statistics --- 00:17:53.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.287 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=89330 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 89330 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 89330 ']' 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
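The nvmf_veth_init sequence traced above builds the whole test network from scratch: a network namespace for the target, veth pairs whose peer ends stay in the root namespace, and a bridge tying those ends together. Condensed to one initiator/target pair, as a sketch rather than the full helper (every device name and address below is the one the harness assigns above; the *2 pair is created the same way):

# target side lives in its own namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# bridge the root-namespace ends of both pairs
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# open the NVMe/TCP port, tagging the rule so teardown can find it later
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The four pings above (10.0.0.3 and 10.0.0.4 from the root namespace, 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk) confirm both directions across the bridge before the target process is started.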
00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.287 11:39:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:53.287 [2024-12-16 11:39:53.880836] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:17:53.287 [2024-12-16 11:39:53.880929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.287 [2024-12-16 11:39:54.035428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.545 [2024-12-16 11:39:54.103059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.545 [2024-12-16 11:39:54.103399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.545 [2024-12-16 11:39:54.103525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.545 [2024-12-16 11:39:54.103677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.545 [2024-12-16 11:39:54.103767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.545 [2024-12-16 11:39:54.105312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.545 [2024-12-16 11:39:54.105391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.545 [2024-12-16 11:39:54.105499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.545 [2024-12-16 11:39:54.105508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.545 11:39:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.545 11:39:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:53.545 11:39:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:53.545 11:39:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.545 11:39:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:53.545 11:39:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.545 11:39:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:53.545 11:39:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:54.112 11:39:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:54.112 11:39:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:54.371 11:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:54.371 11:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:54.937 11:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:54.937 11:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:54.937 11:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
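With nvmf_tgt up inside the namespace, perf.sh assembles its bdev list before wiring up the subsystem: the emulated local NVMe drive is attached from a generated config, its PCI address is recovered with jq, and a small malloc bdev is added alongside it. Reconstructed from the trace (rpc.py stands for scripts/rpc.py against the target started above; the exact plumbing between gen_nvme.sh and load_subsystem_config is elided in the log):

rpc.py load_subsystem_config ...   # config produced by scripts/gen_nvme.sh
rpc.py framework_get_config bdev \
    | jq -r '.[].params | select(.name=="Nvme0").traddr'   # -> 0000:00:10.0
rpc.py bdev_malloc_create 64 512   # 64 MB, 512 B blocks -> Malloc0

Both names end up in bdevs (' Malloc0 Nvme0n1'), and each is exported as a namespace of nqn.2016-06.io.spdk:cnode1 by the nvmf_subsystem_add_ns calls that follow.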
00:17:54.937 11:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:54.937 11:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:54.937 [2024-12-16 11:39:55.638794] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.937 11:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:55.196 11:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:55.196 11:39:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:55.455 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:55.455 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:55.714 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:55.972 [2024-12-16 11:39:56.672102] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:55.972 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:56.231 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:56.231 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:56.231 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:56.231 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:57.608 Initializing NVMe Controllers 00:17:57.608 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:57.608 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:57.608 Initialization complete. Launching workers. 00:17:57.608 ======================================================== 00:17:57.608 Latency(us) 00:17:57.608 Device Information : IOPS MiB/s Average min max 00:17:57.608 PCIE (0000:00:10.0) NSID 1 from core 0: 23708.47 92.61 1349.87 287.18 8174.02 00:17:57.608 ======================================================== 00:17:57.608 Total : 23708.47 92.61 1349.87 287.18 8174.02 00:17:57.608 00:17:57.608 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:58.984 Initializing NVMe Controllers 00:17:58.984 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:58.984 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:58.984 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:58.984 Initialization complete. Launching workers. 
00:17:58.984 ======================================================== 00:17:58.984 Latency(us) 00:17:58.984 Device Information : IOPS MiB/s Average min max 00:17:58.984 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3613.00 14.11 275.34 101.04 7127.93 00:17:58.984 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8046.72 5245.06 14992.22 00:17:58.984 ======================================================== 00:17:58.984 Total : 3738.00 14.60 535.22 101.04 14992.22 00:17:58.984 00:17:58.984 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:00.359 Initializing NVMe Controllers 00:18:00.359 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:00.359 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:00.359 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:00.359 Initialization complete. Launching workers. 00:18:00.359 ======================================================== 00:18:00.359 Latency(us) 00:18:00.359 Device Information : IOPS MiB/s Average min max 00:18:00.359 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8871.00 34.65 3608.53 730.90 9828.71 00:18:00.359 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2697.00 10.54 11960.86 6004.82 23430.85 00:18:00.359 ======================================================== 00:18:00.359 Total : 11568.00 45.19 5555.82 730.90 23430.85 00:18:00.359 00:18:00.359 11:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:00.359 11:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:02.891 Initializing NVMe Controllers 00:18:02.891 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.891 Controller IO queue size 128, less than required. 00:18:02.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.891 Controller IO queue size 128, less than required. 00:18:02.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.891 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:02.891 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:02.891 Initialization complete. Launching workers. 
00:18:02.891 ======================================================== 00:18:02.891 Latency(us) 00:18:02.891 Device Information : IOPS MiB/s Average min max 00:18:02.891 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1333.09 333.27 97970.06 53202.96 160606.17 00:18:02.891 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 614.08 153.52 218937.63 79661.27 345811.94 00:18:02.891 ======================================================== 00:18:02.891 Total : 1947.17 486.79 136119.70 53202.96 345811.94 00:18:02.891 00:18:02.891 11:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:02.891 Initializing NVMe Controllers 00:18:02.891 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.891 Controller IO queue size 128, less than required. 00:18:02.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.891 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:02.891 Controller IO queue size 128, less than required. 00:18:02.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.891 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:02.891 WARNING: Some requested NVMe devices were skipped 00:18:02.891 No valid NVMe controllers or AIO or URING devices found 00:18:02.891 11:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:05.422 Initializing NVMe Controllers 00:18:05.422 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.422 Controller IO queue size 128, less than required. 00:18:05.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:05.422 Controller IO queue size 128, less than required. 00:18:05.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:05.422 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:05.423 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:05.423 Initialization complete. Launching workers. 
00:18:05.423 00:18:05.423 ==================== 00:18:05.423 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:05.423 TCP transport: 00:18:05.423 polls: 8037 00:18:05.423 idle_polls: 4729 00:18:05.423 sock_completions: 3308 00:18:05.423 nvme_completions: 4947 00:18:05.423 submitted_requests: 7458 00:18:05.423 queued_requests: 1 00:18:05.423 00:18:05.423 ==================== 00:18:05.423 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:05.423 TCP transport: 00:18:05.423 polls: 8400 00:18:05.423 idle_polls: 5339 00:18:05.423 sock_completions: 3061 00:18:05.423 nvme_completions: 5993 00:18:05.423 submitted_requests: 8974 00:18:05.423 queued_requests: 1 00:18:05.423 ======================================================== 00:18:05.423 Latency(us) 00:18:05.423 Device Information : IOPS MiB/s Average min max 00:18:05.423 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1234.46 308.62 105810.84 70676.93 163331.32 00:18:05.423 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1495.53 373.88 86196.05 41384.39 143525.74 00:18:05.423 ======================================================== 00:18:05.423 Total : 2730.00 682.50 95065.57 41384.39 163331.32 00:18:05.423 00:18:05.423 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:05.681 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:05.940 rmmod nvme_tcp 00:18:05.940 rmmod nvme_fabrics 00:18:05.940 rmmod nvme_keyring 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 89330 ']' 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 89330 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 89330 ']' 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 89330 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89330 00:18:05.940 killing process with pid 89330 00:18:05.940 11:40:06 
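The --transport-stat dump above is the most informative part of this run when read as ratios rather than raw counters (arithmetic on the numbers printed above):

idle ratio, NSID 1:  idle_polls / polls = 4729 / 8037 ~ 0.59
completions per socket event, NSID 1:  nvme_completions / sock_completions = 4947 / 3308 ~ 1.5
idle ratio, NSID 2:  5339 / 8400 ~ 0.64
completions per socket event, NSID 2:  5993 / 3061 ~ 2.0

So roughly three of every five polls found no work, and each socket wakeup retired one to two NVMe completions; queued_requests holding at 1 suggests the 128-deep queue was never the limiting factor at this 256 KiB I/O size.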
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89330' 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 89330 00:18:05.940 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 89330 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:06.876 00:18:06.876 real 0m14.399s 00:18:06.876 user 0m52.073s 00:18:06.876 sys 0m3.577s 00:18:06.876 
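One detail of the nvmftestfini teardown above that is easy to miss: firewall cleanup never tracks individual rules. Every rule the harness adds via ipts carries an 'SPDK_NVMF:' comment, so iptr can drop them all in one pass. A minimal sketch of the same idiom, using the rule shown earlier in this trace:

# setup: tag each rule with its own spec in a comment
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# teardown: reload the ruleset minus anything tagged SPDK_NVMF
iptables-save | grep -v SPDK_NVMF | iptables-restore

The rest of the teardown mirrors nvmf_veth_init in reverse: the interfaces are detached from nvmf_br and brought down, the bridge and veth pairs are deleted, and the nvmf_tgt_ns_spdk namespace is removed last.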
************************************ 00:18:06.876 END TEST nvmf_perf 00:18:06.876 ************************************ 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.876 ************************************ 00:18:06.876 START TEST nvmf_fio_host 00:18:06.876 ************************************ 00:18:06.876 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:07.136 * Looking for test storage... 00:18:07.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:07.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.136 --rc genhtml_branch_coverage=1 00:18:07.136 --rc genhtml_function_coverage=1 00:18:07.136 --rc genhtml_legend=1 00:18:07.136 --rc geninfo_all_blocks=1 00:18:07.136 --rc geninfo_unexecuted_blocks=1 00:18:07.136 00:18:07.136 ' 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:07.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.136 --rc genhtml_branch_coverage=1 00:18:07.136 --rc genhtml_function_coverage=1 00:18:07.136 --rc genhtml_legend=1 00:18:07.136 --rc geninfo_all_blocks=1 00:18:07.136 --rc geninfo_unexecuted_blocks=1 00:18:07.136 00:18:07.136 ' 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:07.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.136 --rc genhtml_branch_coverage=1 00:18:07.136 --rc genhtml_function_coverage=1 00:18:07.136 --rc genhtml_legend=1 00:18:07.136 --rc geninfo_all_blocks=1 00:18:07.136 --rc geninfo_unexecuted_blocks=1 00:18:07.136 00:18:07.136 ' 00:18:07.136 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:07.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.136 --rc genhtml_branch_coverage=1 00:18:07.136 --rc genhtml_function_coverage=1 00:18:07.136 --rc genhtml_legend=1 00:18:07.136 --rc geninfo_all_blocks=1 00:18:07.136 --rc geninfo_unexecuted_blocks=1 00:18:07.137 00:18:07.137 ' 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.137 11:40:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:07.137 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
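The "[: : integer expression expected" complaint in the trace above is benign: nvmf/common.sh line 33 feeds an empty string to a numeric test, so '[' prints the complaint, returns nonzero, and build_nvmf_app_args simply takes the false branch. A minimal repro; the null-safe second form is an assumption for illustration, not the SPDK source:

    flag=
    if [ "$flag" -eq 1 ]; then echo enabled; fi       # prints "[: : integer expression expected" and continues
    if [ "${flag:-0}" -eq 1 ]; then echo enabled; fi  # guarded form, no complaint
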
00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:07.137 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:07.137 Cannot find device "nvmf_init_br" 00:18:07.138 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:07.138 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:07.397 Cannot find device "nvmf_init_br2" 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:07.397 Cannot find device "nvmf_tgt_br" 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:07.397 Cannot find device "nvmf_tgt_br2" 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:07.397 Cannot find device "nvmf_init_br" 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:07.397 Cannot find device "nvmf_init_br2" 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:07.397 Cannot find device "nvmf_tgt_br" 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:07.397 Cannot find device "nvmf_tgt_br2" 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:07.397 Cannot find device "nvmf_br" 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:07.397 Cannot find device "nvmf_init_if" 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:07.397 11:40:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:07.397 Cannot find device "nvmf_init_if2" 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:07.397 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:07.657 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:07.657 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:18:07.657 00:18:07.657 --- 10.0.0.3 ping statistics --- 00:18:07.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.657 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:07.657 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:07.657 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:18:07.657 00:18:07.657 --- 10.0.0.4 ping statistics --- 00:18:07.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.657 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:07.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:07.657 00:18:07.657 --- 10.0.0.1 ping statistics --- 00:18:07.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.657 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:07.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:18:07.657 00:18:07.657 --- 10.0.0.2 ping statistics --- 00:18:07.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.657 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=89855 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 89855 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 89855 ']' 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.657 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.657 [2024-12-16 11:40:08.376603] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:18:07.657 [2024-12-16 11:40:08.376706] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.916 [2024-12-16 11:40:08.518598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.916 [2024-12-16 11:40:08.582443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.916 [2024-12-16 11:40:08.582704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.916 [2024-12-16 11:40:08.582883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.916 [2024-12-16 11:40:08.583015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.916 [2024-12-16 11:40:08.583050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
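The waitforlisten step above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock before any configuration RPCs are sent. A minimal sketch of that polling loop, assuming the rpc.py path and the max_retries=100 value from this trace (rpc_get_methods is a standard SPDK RPC; the retry interval is an assumption, not the helper's actual source):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do                                     # max_retries=100, as in the trace
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break   # target is up and serving RPCs
        sleep 0.1                                                 # retry interval is an assumption
    done
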
00:18:07.916 [2024-12-16 11:40:08.584365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.916 [2024-12-16 11:40:08.584450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.916 [2024-12-16 11:40:08.587195] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.916 [2024-12-16 11:40:08.587231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.175 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.175 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:18:08.175 11:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:08.434 [2024-12-16 11:40:09.041920] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.434 11:40:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:08.434 11:40:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.434 11:40:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.434 11:40:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:08.693 Malloc1 00:18:08.693 11:40:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:09.261 11:40:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:09.261 11:40:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:09.520 [2024-12-16 11:40:10.222188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:09.520 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:09.779 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:10.038 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:10.038 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:10.038 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:10.038 11:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:10.038 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:10.038 fio-3.35 00:18:10.038 Starting 1 thread 00:18:12.571 00:18:12.571 test: (groupid=0, jobs=1): err= 0: pid=89967: Mon Dec 16 11:40:12 2024 00:18:12.571 read: IOPS=9083, BW=35.5MiB/s (37.2MB/s)(71.2MiB/2007msec) 00:18:12.571 slat (usec): min=2, max=281, avg= 2.82, stdev= 3.14 00:18:12.571 clat (usec): min=2833, max=13600, avg=7367.27, stdev=597.33 00:18:12.571 lat (usec): min=2870, max=13603, avg=7370.10, stdev=597.16 00:18:12.571 clat percentiles (usec): 00:18:12.571 | 1.00th=[ 6259], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 6980], 00:18:12.571 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7308], 60.00th=[ 7439], 00:18:12.571 | 70.00th=[ 7570], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8160], 00:18:12.571 | 99.00th=[ 8717], 99.50th=[10945], 99.90th=[12649], 99.95th=[12911], 00:18:12.571 | 99.99th=[13566] 00:18:12.571 bw ( KiB/s): min=35640, max=36680, per=99.95%, avg=36314.00, stdev=487.34, samples=4 00:18:12.571 iops : min= 8912, max= 9170, avg=9079.00, stdev=120.91, samples=4 00:18:12.571 write: IOPS=9092, BW=35.5MiB/s (37.2MB/s)(71.3MiB/2007msec); 0 zone resets 00:18:12.571 slat (usec): min=2, max=221, avg= 2.88, stdev= 2.19 00:18:12.572 clat (usec): min=2112, max=12846, avg=6661.51, stdev=517.43 00:18:12.572 lat (usec): min=2124, max=12849, avg=6664.39, stdev=517.36 00:18:12.572 clat percentiles (usec): 00:18:12.572 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 6128], 20.00th=[ 6325], 00:18:12.572 | 30.00th=[ 6456], 
40.00th=[ 6587], 50.00th=[ 6652], 60.00th=[ 6718], 00:18:12.572 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7308], 00:18:12.572 | 99.00th=[ 7701], 99.50th=[ 8717], 99.90th=[11469], 99.95th=[11863], 00:18:12.572 | 99.99th=[12780] 00:18:12.572 bw ( KiB/s): min=36032, max=36608, per=100.00%, avg=36386.00, stdev=258.35, samples=4 00:18:12.572 iops : min= 9008, max= 9152, avg=9096.50, stdev=64.59, samples=4 00:18:12.572 lat (msec) : 4=0.12%, 10=99.39%, 20=0.49% 00:18:12.572 cpu : usr=67.65%, sys=23.03%, ctx=6, majf=0, minf=6 00:18:12.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:12.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.572 issued rwts: total=18230,18248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.572 00:18:12.572 Run status group 0 (all jobs): 00:18:12.572 READ: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.2MiB (74.7MB), run=2007-2007msec 00:18:12.572 WRITE: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.3MiB (74.7MB), run=2007-2007msec 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:12.572 11:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:12.572 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:12.572 fio-3.35 00:18:12.572 Starting 1 thread 00:18:15.136 00:18:15.136 test: (groupid=0, jobs=1): err= 0: pid=90020: Mon Dec 16 11:40:15 2024 00:18:15.136 read: IOPS=7921, BW=124MiB/s (130MB/s)(248MiB/2007msec) 00:18:15.136 slat (usec): min=2, max=145, avg= 3.99, stdev= 2.36 00:18:15.136 clat (usec): min=2580, max=17534, avg=9413.46, stdev=2280.88 00:18:15.136 lat (usec): min=2598, max=17538, avg=9417.45, stdev=2280.99 00:18:15.136 clat percentiles (usec): 00:18:15.136 | 1.00th=[ 4883], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7373], 00:18:15.136 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9896], 00:18:15.136 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12256], 95.00th=[13173], 00:18:15.136 | 99.00th=[15401], 99.50th=[16057], 99.90th=[17171], 99.95th=[17433], 00:18:15.136 | 99.99th=[17433] 00:18:15.136 bw ( KiB/s): min=60672, max=74560, per=51.77%, avg=65616.00, stdev=6512.43, samples=4 00:18:15.136 iops : min= 3792, max= 4660, avg=4101.00, stdev=407.03, samples=4 00:18:15.136 write: IOPS=4654, BW=72.7MiB/s (76.3MB/s)(134MiB/1842msec); 0 zone resets 00:18:15.136 slat (usec): min=32, max=344, avg=39.66, stdev= 9.01 00:18:15.136 clat (usec): min=4961, max=18733, avg=11656.64, stdev=1942.04 00:18:15.136 lat (usec): min=4998, max=18768, avg=11696.30, stdev=1943.32 00:18:15.136 clat percentiles (usec): 00:18:15.136 | 1.00th=[ 7832], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10028], 00:18:15.136 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:18:15.136 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14353], 95.00th=[15139], 00:18:15.136 | 99.00th=[16188], 99.50th=[16712], 99.90th=[17957], 99.95th=[18220], 00:18:15.136 | 99.99th=[18744] 00:18:15.136 bw ( KiB/s): min=62592, max=78304, per=91.77%, avg=68344.00, stdev=7377.71, samples=4 00:18:15.136 iops : min= 3912, max= 4894, avg=4271.50, stdev=461.11, samples=4 00:18:15.136 lat (msec) : 4=0.16%, 10=46.39%, 20=53.45% 00:18:15.136 cpu : usr=73.44%, sys=17.29%, ctx=37, majf=0, minf=15 00:18:15.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:15.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:15.136 issued rwts: total=15898,8574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:15.136 00:18:15.136 Run status group 0 (all jobs): 00:18:15.136 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=248MiB (260MB), run=2007-2007msec 00:18:15.136 WRITE: bw=72.7MiB/s (76.3MB/s), 
72.7MiB/s-72.7MiB/s (76.3MB/s-76.3MB/s), io=134MiB (140MB), run=1842-1842msec 00:18:15.136 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.136 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:18:15.136 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:15.136 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:15.136 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:15.136 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:15.136 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:15.395 rmmod nvme_tcp 00:18:15.395 rmmod nvme_fabrics 00:18:15.395 rmmod nvme_keyring 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 89855 ']' 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 89855 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 89855 ']' 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 89855 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89855 00:18:15.395 killing process with pid 89855 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89855' 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 89855 00:18:15.395 11:40:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 89855 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:15.653 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:15.654 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:15.654 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:15.654 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:15.654 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:15.654 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:15.654 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:15.654 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:15.654 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:18:15.913 ************************************ 00:18:15.913 END TEST nvmf_fio_host 00:18:15.913 ************************************ 00:18:15.913 00:18:15.913 real 0m8.846s 00:18:15.913 user 0m34.929s 00:18:15.913 sys 0m2.368s 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.913 ************************************ 00:18:15.913 START TEST nvmf_failover 00:18:15.913 ************************************ 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:15.913 * Looking for test storage... 00:18:15.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:18:15.913 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:16.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.172 --rc genhtml_branch_coverage=1 00:18:16.172 --rc genhtml_function_coverage=1 00:18:16.172 --rc genhtml_legend=1 00:18:16.172 --rc geninfo_all_blocks=1 00:18:16.172 --rc geninfo_unexecuted_blocks=1 00:18:16.172 00:18:16.172 ' 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:16.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.172 --rc genhtml_branch_coverage=1 00:18:16.172 --rc genhtml_function_coverage=1 00:18:16.172 --rc genhtml_legend=1 00:18:16.172 --rc geninfo_all_blocks=1 00:18:16.172 --rc geninfo_unexecuted_blocks=1 00:18:16.172 00:18:16.172 ' 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:16.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.172 --rc genhtml_branch_coverage=1 00:18:16.172 --rc genhtml_function_coverage=1 00:18:16.172 --rc genhtml_legend=1 00:18:16.172 --rc geninfo_all_blocks=1 00:18:16.172 --rc geninfo_unexecuted_blocks=1 00:18:16.172 00:18:16.172 ' 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:16.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.172 --rc genhtml_branch_coverage=1 00:18:16.172 --rc genhtml_function_coverage=1 00:18:16.172 --rc genhtml_legend=1 00:18:16.172 --rc geninfo_all_blocks=1 00:18:16.172 --rc geninfo_unexecuted_blocks=1 00:18:16.172 00:18:16.172 ' 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.172 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:16.173 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no
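failover.sh points its bdevperf client at the dedicated RPC socket set just above (/var/tmp/bdevperf.sock) so NVMe paths can be reconfigured while I/O is in flight. A hedged sketch of that pattern, not the script's actual source: the binary location and flag values are assumptions for a typical SPDK build, while bdev_nvme_attach_controller and the 10.0.0.3:4420 listener reuse values seen earlier in this log:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$bdevperf" -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -z &   # -z parks it until told to run
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
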
00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:16.173 Cannot find device "nvmf_init_br" 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:16.173 Cannot find device "nvmf_init_br2" 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:18:16.173 Cannot find device "nvmf_tgt_br" 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.173 Cannot find device "nvmf_tgt_br2" 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:16.173 Cannot find device "nvmf_init_br" 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:16.173 Cannot find device "nvmf_init_br2" 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:16.173 Cannot find device "nvmf_tgt_br" 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:16.173 Cannot find device "nvmf_tgt_br2" 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:16.173 Cannot find device "nvmf_br" 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:16.173 Cannot find device "nvmf_init_if" 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:16.173 Cannot find device "nvmf_init_if2" 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:16.173 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:16.432 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:16.432 
11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:16.432 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:16.432 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:16.432 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:16.432 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:16.432 11:40:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:16.432 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:16.432 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.179 ms 00:18:16.432 00:18:16.432 --- 10.0.0.3 ping statistics --- 00:18:16.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.432 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:16.432 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:16.432 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:18:16.432 00:18:16.432 --- 10.0.0.4 ping statistics --- 00:18:16.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.432 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:16.432 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:16.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:16.432 00:18:16.432 --- 10.0.0.1 ping statistics --- 00:18:16.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.432 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:16.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:18:16.433 00:18:16.433 --- 10.0.0.2 ping statistics --- 00:18:16.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.433 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:16.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
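[editor's note] The nvmf_veth_init block above builds the whole test network: veth pairs whose initiator ends stay in the root namespace, target ends moved into nvmf_tgt_ns_spdk, all bridge-side peers joined through nvmf_br, iptables ACCEPT rules for port 4420, and ping checks in both directions. The earlier "Cannot find device" and "Cannot open network namespace" messages are expected, because the teardown of a previous run executes before anything exists. A condensed sketch of the same topology with one initiator/target pair shown, names and addresses taken from the log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the ns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the root-namespace peers together so the two namespaces can talk
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3   # root ns -> target ns, as verified above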
00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=90288 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 90288 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 90288 ']' 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.433 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:16.692 [2024-12-16 11:40:17.225698] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:18:16.692 [2024-12-16 11:40:17.225798] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.692 [2024-12-16 11:40:17.379037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:16.692 [2024-12-16 11:40:17.433624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.692 [2024-12-16 11:40:17.433933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.692 [2024-12-16 11:40:17.434104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.692 [2024-12-16 11:40:17.434284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.692 [2024-12-16 11:40:17.434330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
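[editor's note] nvmfappstart (nvmf/common.sh@508-@510) launches the target inside the namespace so it owns the 10.0.0.3/10.0.0.4 addresses, with full tracepoints (-e 0xFFFF) and core mask 0xE, i.e. cores 1-3, which matches the three "Reactor started" notices that follow. A simplified sketch of the launch and readiness wait; the polling loop is a stand-in for waitforlisten, not its actual implementation:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # stand-in for waitforlisten: poll until the RPC socket answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done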
00:18:16.692 [2024-12-16 11:40:17.435729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.692 [2024-12-16 11:40:17.435814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.692 [2024-12-16 11:40:17.435820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.950 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.950 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:16.950 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.950 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.950 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:16.950 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.950 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:17.209 [2024-12-16 11:40:17.915817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.209 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:17.775 Malloc0 00:18:17.775 11:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:18.033 11:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:18.291 11:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:18.549 [2024-12-16 11:40:19.093406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:18.549 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:18.808 [2024-12-16 11:40:19.337510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:18.808 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:19.066 [2024-12-16 11:40:19.581719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:19.066 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:19.066 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=90392 00:18:19.066 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:19.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
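[editor's note] Steps host/failover.sh@22 through @28 above configure the target entirely over JSON-RPC: a TCP transport with an 8192-byte IO unit, a 64 MiB malloc bdev with 512-byte blocks as the backing namespace, and a single subsystem listening on all three ports so there is something to fail over between. The same sequence, condensed into a sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done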
00:18:19.066 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 90392 /var/tmp/bdevperf.sock 00:18:19.066 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 90392 ']' 00:18:19.066 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.066 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.066 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.066 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.066 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:19.325 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.325 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:19.325 11:40:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:19.583 NVMe0n1 00:18:19.583 11:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:20.150 00:18:20.150 11:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=90426 00:18:20.150 11:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:20.150 11:40:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:21.085 11:40:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:21.344 [2024-12-16 11:40:21.960729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2050a90 is same with the state(6) to be set 00:18:21.344 [2024-12-16 11:40:21.960796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2050a90 is same with the state(6) to be set 00:18:21.344 [2024-12-16 11:40:21.960825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2050a90 is same with the state(6) to be set 00:18:21.344 [2024-12-16 11:40:21.960834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2050a90 is same with the state(6) to be set 00:18:21.344 [2024-12-16 11:40:21.960843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2050a90 is same with the state(6) to be set 00:18:21.344 [2024-12-16 11:40:21.960852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2050a90 is same with the state(6) to be set 00:18:21.344 [2024-12-16 11:40:21.960861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2050a90 is same with the state(6) to be set 00:18:21.344 [2024-12-16 11:40:21.960869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2050a90 is same with the state(6) to 
be set
00:18:21.344 [the tcp.c:1790:nvmf_tcp_qpair_set_recv_state *ERROR* record "The recv state of tqpair=0x2050a90 is same with the state(6) to be set" repeats verbatim at every timestamp from 11:40:21.960877 through 11:40:21.961583; duplicates collapsed]
00:18:21.345 11:40:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:18:24.628 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:24.628
00:18:24.886 11:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:18:25.144 [2024-12-16 11:40:25.697681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2051540 is same with the state(6) to be set
00:18:25.144 [the same *ERROR* record for tqpair=0x2051540 repeats verbatim from 11:40:25.697949 through 11:40:25.698634; duplicates collapsed]
00:18:25.145 11:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:18:28.428 11:40:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:28.428 [2024-12-16 11:40:29.000671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:18:28.428 11:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:18:29.363 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:18:29.621 [2024-12-16 11:40:30.309401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17f50 is same with the state(6) to be set
00:18:29.621 [the same *ERROR* record for tqpair=0x1f17f50 repeats verbatim from 11:40:30.309455 through 11:40:30.309730; duplicates collapsed]
00:18:29.621 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 90426
00:18:36.214 {
00:18:36.214   "results": [
00:18:36.214     {
00:18:36.214       "job": "NVMe0n1",
00:18:36.214       "core_mask": "0x1",
00:18:36.214       "workload": "verify",
00:18:36.214       "status": "finished",
00:18:36.214       "verify_range": {
00:18:36.214         "start": 0,
00:18:36.214         "length": 16384
00:18:36.214       },
00:18:36.214       "queue_depth": 128,
00:18:36.214       "io_size": 4096,
00:18:36.214       "runtime": 15.005525,
00:18:36.214       "iops": 8932.50985886865,
00:18:36.214       "mibps": 34.892616636205666,
00:18:36.214       "io_failed": 3085,
00:18:36.214       "io_timeout": 0,
00:18:36.214       "avg_latency_us": 13975.422986245823,
00:18:36.214       "min_latency_us": 886.2254545454546,
00:18:36.214       "max_latency_us": 23116.334545454545
00:18:36.214     }
00:18:36.214   ],
00:18:36.214   "core_count": 1
00:18:36.214 }
00:18:36.214 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 90392
00:18:36.214 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 90392 ']'
00:18:36.214 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 90392
00:18:36.214 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:36.214 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:36.214 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90392
00:18:36.214 killing process with pid 90392
00:18:36.214 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:36.214 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:36.214 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90392'
00:18:36.214 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 90392
00:18:36.214 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 90392
00:18:36.214 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:36.214 [2024-12-16 11:40:19.660769] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
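[editor's note] The sequence just logged (host/failover.sh@43 through @57) is the whole failover exercise: bdevperf holds one controller with paths to ports 4420, 4421 and 4422 (-x failover), and the script migrates the active listener while I/O runs; each burst of recv-state *ERROR* records is the target tearing down the qpairs on a removed listener. Condensed into a sketch, commands as issued in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4420   # drop the active path
  sleep 3                                                               # let bdevperf fail over to 4421
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n $nqn -x failover
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4421   # force a second failover
  sleep 3
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420      # bring the original port back
  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4422   # fail back to 4420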
00:18:36.214 [2024-12-16 11:40:19.660884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90392 ]
00:18:36.214 [2024-12-16 11:40:19.813877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:36.214 [2024-12-16 11:40:19.867596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:36.214 Running I/O for 15 seconds...
00:18:36.214 9307.00 IOPS, 36.36 MiB/s [2024-12-16T11:40:36.967Z]
[... repeated nvme_qpair.c notices (11:40:21.963-21.966) omitted: every in-flight command on qid:1 (READ lba 86944-87072, WRITE lba 87136-87720) is printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08) ...]
[... repeated nvme_qpair.c notices (11:40:21.966-21.977) omitted: nvme_qpair_abort_queued_reqs then aborts the still-queued requests (WRITE lba 87728-87960, READ lba 87080-87128); each is "Command completed manually" with the same ABORTED - SQ DELETION status ...]
00:18:36.218 [2024-12-16 11:40:21.977806] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
[... four admin-queue ASYNC EVENT REQUEST commands (qid:0 cid:0-3) completed with the same ABORTED - SQ DELETION status ...]
00:18:36.218 [2024-12-16 11:40:21.978009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:36.218 [2024-12-16 11:40:21.978074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f9f50 (9): Bad file descriptor
00:18:36.218 [2024-12-16 11:40:21.983115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:36.218 [2024-12-16 11:40:22.010879] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
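The scripted setup and trigger behind this failover are not shown in this excerpt. As a rough sketch only, following SPDK's scripts/rpc.py conventions: a second TCP path to the same subsystem is attached under one controller name, and the failover is then forced by removing the active listener, which produces exactly the SQ-deletion abort storm seen above. The bdev name, addresses, and the -x failover policy flag below are illustrative assumptions, not this job's actual scripts.

    #!/usr/bin/env bash
    # Sketch (illustrative, not this job's script): attach one bdev_nvme
    # controller over two TCP paths so bdev_nvme can fail over between them.
    rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Dropping the active listener deletes its submission queues on the
    # target, aborting queued I/O ("ABORTED - SQ DELETION" above) and
    # driving the initiator onto the 4421 path.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420

The IOPS samples just below show the expected signature of a clean failover: throughput dips from 9307.00 to 9044.00 IOPS while the qpair is down, then recovers once the controller reset completes.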
00:18:36.218 9044.00 IOPS, 35.33 MiB/s [2024-12-16T11:40:36.971Z]
9159.33 IOPS, 35.78 MiB/s [2024-12-16T11:40:36.971Z]
9236.50 IOPS, 36.08 MiB/s [2024-12-16T11:40:36.971Z]
[... repeated nvme_qpair.c notices from 11:40:25.698 omitted: a second abort burst in which in-flight READ commands on qid:1 (lba 105336-105648) are printed and completed with ABORTED - SQ DELETION (00/08); the burst continues past the end of this excerpt ...]
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.219 [2024-12-16 11:40:25.700362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.219 [2024-12-16 11:40:25.700376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.219 [2024-12-16 11:40:25.700391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.219 [2024-12-16 11:40:25.700405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.219 [2024-12-16 11:40:25.700420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.219 [2024-12-16 11:40:25.700441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.219 [2024-12-16 11:40:25.700457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.219 [2024-12-16 11:40:25.700472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.219 [2024-12-16 11:40:25.700487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.219 [2024-12-16 11:40:25.700501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.219 [2024-12-16 11:40:25.700517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.219 [2024-12-16 11:40:25.700530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.219 [2024-12-16 11:40:25.700546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.219 [2024-12-16 11:40:25.700561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.219 [2024-12-16 11:40:25.700577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.219 [2024-12-16 11:40:25.700592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.219 [2024-12-16 11:40:25.700607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.219 [2024-12-16 11:40:25.700621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.219 [2024-12-16 11:40:25.700637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.219 [2024-12-16 11:40:25.700650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:36.219 [2024-12-16 11:40:25.700666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.219 [2024-12-16 11:40:25.700680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.700695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.220 [2024-12-16 11:40:25.700709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.700724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.220 [2024-12-16 11:40:25.700738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.700754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.220 [2024-12-16 11:40:25.700768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.700783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.220 [2024-12-16 11:40:25.700797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.700819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.220 [2024-12-16 11:40:25.700834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.700850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.220 [2024-12-16 11:40:25.700863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.700879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.220 [2024-12-16 11:40:25.700893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.700909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.220 [2024-12-16 11:40:25.700923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.700939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.700953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:36.220 [2024-12-16 11:40:25.700968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.700982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.700998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701284] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.220 [2024-12-16 11:40:25.701870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.220 [2024-12-16 11:40:25.701886] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.701899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.701915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.701929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.701944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.701967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.701984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.701999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.221 [2024-12-16 11:40:25.702403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.702457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106192 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.702471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.702501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.702513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106200 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.702526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.702558] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.702569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106208 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.702583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.702607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.702617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106216 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.702631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.702655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.702665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106224 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.702678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.702702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.702713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106232 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.702726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.702749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.702760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106240 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.702781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.702821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.702832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106248 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.702846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.702870] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.702881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106256 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.702894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.702918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.702929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106264 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.702942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.702960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.702970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.702981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106272 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.702994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.703008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.703018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.703028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106280 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.703042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.703056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.703066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.703076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106288 len:8 PRP1 0x0 PRP2 0x0 00:18:36.221 [2024-12-16 11:40:25.703090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.221 [2024-12-16 11:40:25.703103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.221 [2024-12-16 11:40:25.703113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.221 [2024-12-16 11:40:25.703124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106296 len:8 PRP1 0x0 PRP2 0x0 00:18:36.222 [2024-12-16 11:40:25.703137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.222 [2024-12-16 11:40:25.703166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.222 [2024-12-16 11:40:25.703177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:18:36.222 [2024-12-16 11:40:25.703194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106304 len:8 PRP1 0x0 PRP2 0x0 00:18:36.222 [2024-12-16 11:40:25.703209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.222 [2024-12-16 11:40:25.703223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.222 [2024-12-16 11:40:25.703233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.222 [2024-12-16 11:40:25.703244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106312 len:8 PRP1 0x0 PRP2 0x0 00:18:36.222 [2024-12-16 11:40:25.703257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.222 [2024-12-16 11:40:25.703271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.222 [2024-12-16 11:40:25.703281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.222 [2024-12-16 11:40:25.703292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106320 len:8 PRP1 0x0 PRP2 0x0 00:18:36.222 [2024-12-16 11:40:25.703305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.222 [2024-12-16 11:40:25.703318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.222 [2024-12-16 11:40:25.703328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.222 [2024-12-16 11:40:25.703339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106328 len:8 PRP1 0x0 PRP2 0x0 00:18:36.222 [2024-12-16 11:40:25.703353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.222 [2024-12-16 11:40:25.703369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.222 [2024-12-16 11:40:25.703379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.222 [2024-12-16 11:40:25.703390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106336 len:8 PRP1 0x0 PRP2 0x0 00:18:36.222 [2024-12-16 11:40:25.712005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.222 [2024-12-16 11:40:25.712054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.222 [2024-12-16 11:40:25.712070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.222 [2024-12-16 11:40:25.712085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106344 len:8 PRP1 0x0 PRP2 0x0 00:18:36.222 [2024-12-16 11:40:25.712103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.222 [2024-12-16 11:40:25.712120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.222 [2024-12-16 11:40:25.712133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.222 
[2024-12-16 11:40:25.712163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106352 len:8 PRP1 0x0 PRP2 0x0 00:18:36.222 [2024-12-16 11:40:25.712183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.222 [2024-12-16 11:40:25.712257] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:18:36.222 [2024-12-16 11:40:25.712335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.222 [2024-12-16 11:40:25.712362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.222 [2024-12-16 11:40:25.712383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.222 [2024-12-16 11:40:25.712418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.222 [2024-12-16 11:40:25.712437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.222 [2024-12-16 11:40:25.712467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.222 [2024-12-16 11:40:25.712485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.222 [2024-12-16 11:40:25.712502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.222 [2024-12-16 11:40:25.712519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:36.222 [2024-12-16 11:40:25.712585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f9f50 (9): Bad file descriptor 00:18:36.222 [2024-12-16 11:40:25.717516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:36.222 [2024-12-16 11:40:25.744827] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
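The "(00/08)" printed with every aborted completion above is the NVMe status code type / status code pair; SPDK's public headers spell these as SPDK_NVME_SCT_GENERIC and SPDK_NVME_SC_ABORTED_SQ_DELETION. A minimal sketch of how a completion callback can recognize these failover-induced aborts, assuming only spdk/nvme_spec.h is on the include path (the helper name is hypothetical, not an SPDK API):

    /* Classify the "(00/08)" completions seen in the log above.
     * cpl_aborted_by_sq_deletion() is a hypothetical helper, not SPDK API. */
    #include <stdbool.h>
    #include "spdk/nvme_spec.h"

    /* True when an I/O completed only because its submission queue was
     * deleted (sct 0x0 / sc 0x8): it was in flight while the qpair was
     * torn down for failover and is a candidate for retry on the new path. */
    static bool
    cpl_aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }

Retrying such completions on the new path rather than failing them up the stack is consistent with the run above: throughput holds near 9000 IOPS across each "Start failover" / "Resetting controller successful" cycle instead of surfacing I/O errors.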
00:18:36.222 9050.80 IOPS, 35.35 MiB/s [2024-12-16T11:40:36.975Z] 9002.00 IOPS, 35.16 MiB/s [2024-12-16T11:40:36.975Z] 9013.86 IOPS, 35.21 MiB/s [2024-12-16T11:40:36.975Z] 9018.38 IOPS, 35.23 MiB/s [2024-12-16T11:40:36.975Z] 9040.89 IOPS, 35.32 MiB/s [2024-12-16T11:40:36.975Z]
00:18:36.224 [2024-12-16 11:40:30.310895 .. 11:40:30.312886] nvme_qpair.c: [condensed: next failover cycle; in-flight WRITE (lba:50904-51280) and interleaved READ (lba:50592-50680) commands on sqid:1 again completed as "ABORTED - SQ DELETION (00/08)"; repeated per-command NOTICE lines condensed]
[2024-12-16 11:40:30.312902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.312916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.312931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.312946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.312976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.312990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.224 [2024-12-16 11:40:30.313532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:122 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.224 [2024-12-16 11:40:30.313562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.224 [2024-12-16 11:40:30.313592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.224 [2024-12-16 11:40:30.313622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.224 [2024-12-16 11:40:30.313658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.224 [2024-12-16 11:40:30.313689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.224 [2024-12-16 11:40:30.313718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50872 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.224 [2024-12-16 11:40:30.313866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.224 [2024-12-16 11:40:30.313881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.225 [2024-12-16 11:40:30.313895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.313912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.225 [2024-12-16 11:40:30.313925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.313941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.225 [2024-12-16 11:40:30.313963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.313980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.313994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 
[2024-12-16 11:40:30.314201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.314954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:36.225 [2024-12-16 11:40:30.314972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.315004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.225 [2024-12-16 11:40:30.315020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51600 len:8 PRP1 0x0 PRP2 0x0 00:18:36.225 [2024-12-16 11:40:30.315034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.315053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:36.225 [2024-12-16 11:40:30.315064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:36.225 [2024-12-16 11:40:30.315075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51608 len:8 PRP1 0x0 PRP2 0x0 00:18:36.225 [2024-12-16 11:40:30.315088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.315163] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:18:36.225 [2024-12-16 11:40:30.315227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.225 [2024-12-16 11:40:30.315248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.225 [2024-12-16 11:40:30.315264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000
00:18:36.225 [2024-12-16 11:40:30.315278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:36.225 [2024-12-16 11:40:30.315293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:36.225 [2024-12-16 11:40:30.315307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:36.226 [2024-12-16 11:40:30.315321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:36.226 [2024-12-16 11:40:30.315335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:36.226 [2024-12-16 11:40:30.315349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:18:36.226 [2024-12-16 11:40:30.315397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f9f50 (9): Bad file descriptor
00:18:36.226 [2024-12-16 11:40:30.319222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:18:36.226 [2024-12-16 11:40:30.343511] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:18:36.226 9002.30 IOPS, 35.17 MiB/s [2024-12-16T11:40:36.979Z] 8983.00 IOPS, 35.09 MiB/s [2024-12-16T11:40:36.979Z] 8972.50 IOPS, 35.05 MiB/s [2024-12-16T11:40:36.979Z] 8952.08 IOPS, 34.97 MiB/s [2024-12-16T11:40:36.979Z] 8942.64 IOPS, 34.93 MiB/s
00:18:36.226 Latency(us)
00:18:36.226 [2024-12-16T11:40:36.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:36.226 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:36.226 Verification LBA range: start 0x0 length 0x4000
00:18:36.226 NVMe0n1 : 15.01 8932.51 34.89 205.59 0.00 13975.42 886.23 23116.33
00:18:36.226 [2024-12-16T11:40:36.979Z] ===================================================================================================================
00:18:36.226 [2024-12-16T11:40:36.979Z] Total : 8932.51 34.89 205.59 0.00 13975.42 886.23 23116.33
00:18:36.226 Received shutdown signal, test time was about 15.000000 seconds
00:18:36.226
00:18:36.226 Latency(us)
00:18:36.226 [2024-12-16T11:40:36.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:36.226 [2024-12-16T11:40:36.979Z] ===================================================================================================================
00:18:36.226 [2024-12-16T11:40:36.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
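The trace at host/failover.sh@65-67 above is the pass/fail gate for the 15-second run: the script counts 'Resetting controller successful' notices in the captured bdevperf output and requires exactly three, one per forced path failure. A minimal standalone sketch of that check, assuming the output was captured to try.txt as this harness does (the variable name and error message are illustrative, not the script's own):

    # Count successful controller resets recorded during the failover run.
    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi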
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90629
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90629 /var/tmp/bdevperf.sock
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 90629 ']'
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:36.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:18:36.226 [2024-12-16 11:40:36.668104] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:18:36.226 [2024-12-16 11:40:36.920285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:18:36.226 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:36.793 NVMe0n1
00:18:36.793 11:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:37.051
00:18:37.051 11:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:37.309
00:18:37.309 11:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:37.309 11:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:18:37.567 11:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:37.826 11:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:18:41.108 11:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:41.108 11:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
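The rpc.py calls traced above (host/failover.sh@76-80) build the multipath configuration for this phase: two extra listeners on 10.0.0.3:4421 and :4422, then three bdev_nvme_attach_controller calls against the same subsystem with -x failover, so the additional paths back the single NVMe0n1 bdev instead of creating new controllers. The same setup condensed into a sketch (flags, addresses, and the NQN are verbatim from this log; only the loop is a compaction):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
    # -x failover registers the extra transport IDs as standby paths for the
    # existing NVMe0 controller rather than attaching separate controllers.
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.3 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done

Detaching the active 4420 path (host/failover.sh@84) is then enough to force the driver onto a standby path, which is exactly the abort-and-reset sequence recorded in try.txt below.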
00:18:41.108 11:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90748
00:18:41.108 11:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:41.108 11:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 90748
00:18:42.484 {
00:18:42.484 "results": [
00:18:42.484 {
00:18:42.484 "job": "NVMe0n1",
00:18:42.484 "core_mask": "0x1",
00:18:42.484 "workload": "verify",
00:18:42.484 "status": "finished",
00:18:42.484 "verify_range": {
00:18:42.484 "start": 0,
00:18:42.484 "length": 16384
00:18:42.484 },
00:18:42.484 "queue_depth": 128,
00:18:42.484 "io_size": 4096,
00:18:42.484 "runtime": 1.008423,
00:18:42.484 "iops": 8755.25449141878,
00:18:42.484 "mibps": 34.20021285710461,
00:18:42.484 "io_failed": 0,
00:18:42.484 "io_timeout": 0,
00:18:42.484 "avg_latency_us": 14549.316410589072,
00:18:42.484 "min_latency_us": 2517.1781818181817,
00:18:42.484 "max_latency_us": 15966.952727272726
00:18:42.484 }
00:18:42.484 ],
00:18:42.484 "core_count": 1
00:18:42.484 }
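The JSON above is what perform_tests emits for the 1-second sanity run: 8755.25 IOPS of 4096-byte I/O with no failures, and the mibps field is simply iops * io_size / 2^20 (8755.25 * 4096 / 1048576 is about 34.20). A hypothetical post-processing step, assuming the JSON were saved to a file such as results.json and that jq is available (the harness itself uses neither):

    # Pull the headline numbers out of a saved bdevperf result.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.io_failed) failed"' results.json
    # Cross-check the MiB/s derivation from iops and io_size.
    awk 'BEGIN { printf "%.2f\n", 8755.25449141878 * 4096 / 1048576 }'    # prints 34.20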
00:18:42.484 11:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:42.484 [2024-12-16 11:40:36.111240] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:18:42.484 [2024-12-16 11:40:36.111353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90629 ]
00:18:42.484 [2024-12-16 11:40:36.259961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:42.484 [2024-12-16 11:40:36.313668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:42.484 [2024-12-16 11:40:38.383547] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:18:42.484 [2024-12-16 11:40:38.383680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:42.484 [2024-12-16 11:40:38.383707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:42.484 [2024-12-16 11:40:38.383727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:42.484 [2024-12-16 11:40:38.383742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:42.484 [2024-12-16 11:40:38.383757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:42.484 [2024-12-16 11:40:38.383771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:42.484 [2024-12-16 11:40:38.383785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:42.484 [2024-12-16 11:40:38.383799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:42.484 [2024-12-16 11:40:38.383813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:18:42.484 [2024-12-16 11:40:38.383868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:18:42.484 [2024-12-16 11:40:38.383901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7cf50 (9): Bad file descriptor
00:18:42.484 [2024-12-16 11:40:38.387718] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:18:42.484 Running I/O for 1 seconds...
00:18:42.484 8701.00 IOPS, 33.99 MiB/s
00:18:42.484
00:18:42.484 Latency(us)
00:18:42.484 [2024-12-16T11:40:43.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:42.484 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:42.484 Verification LBA range: start 0x0 length 0x4000
00:18:42.484 NVMe0n1 : 1.01 8755.25 34.20 0.00 0.00 14549.32 2517.18 15966.95
00:18:42.484 [2024-12-16T11:40:43.237Z] ===================================================================================================================
00:18:42.484 [2024-12-16T11:40:43.237Z] Total : 8755.25 34.20 0.00 0.00 14549.32 2517.18 15966.95
00:18:42.484 11:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:42.484 11:40:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:18:42.743 11:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:42.743 11:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:18:43.001 11:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:43.259 11:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:43.259 11:40:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:18:46.538 11:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:46.538 11:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:18:46.804 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 90629
00:18:46.804 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 90629 ']'
00:18:46.804 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 90629
00:18:46.804 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:46.804 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:46.804 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90629
00:18:46.805 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:46.805 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:46.805
11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90629' 00:18:46.805 killing process with pid 90629 00:18:46.805 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 90629 00:18:46.805 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 90629 00:18:46.805 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:47.062 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:47.320 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:47.320 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:47.320 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:47.320 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:47.320 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:18:47.320 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:47.321 rmmod nvme_tcp 00:18:47.321 rmmod nvme_fabrics 00:18:47.321 rmmod nvme_keyring 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 90288 ']' 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 90288 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 90288 ']' 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 90288 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90288 00:18:47.321 killing process with pid 90288 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90288' 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 90288 00:18:47.321 11:40:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 90288 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:47.579 11:40:48 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:47.579 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:47.837 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:18:47.838 ************************************ 00:18:47.838 END TEST nvmf_failover 00:18:47.838 ************************************ 00:18:47.838 00:18:47.838 real 0m31.933s 00:18:47.838 user 2m3.708s 00:18:47.838 sys 0m4.572s 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.838 ************************************ 00:18:47.838 START TEST nvmf_host_discovery 00:18:47.838 ************************************ 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:47.838 * Looking for test storage... 00:18:47.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:18:47.838 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:48.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.097 --rc genhtml_branch_coverage=1 00:18:48.097 --rc genhtml_function_coverage=1 00:18:48.097 --rc genhtml_legend=1 00:18:48.097 --rc geninfo_all_blocks=1 00:18:48.097 --rc geninfo_unexecuted_blocks=1 00:18:48.097 00:18:48.097 ' 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:48.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.097 --rc genhtml_branch_coverage=1 00:18:48.097 --rc genhtml_function_coverage=1 00:18:48.097 --rc genhtml_legend=1 00:18:48.097 --rc geninfo_all_blocks=1 00:18:48.097 --rc geninfo_unexecuted_blocks=1 00:18:48.097 00:18:48.097 ' 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:48.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.097 --rc genhtml_branch_coverage=1 00:18:48.097 --rc genhtml_function_coverage=1 00:18:48.097 --rc genhtml_legend=1 00:18:48.097 --rc geninfo_all_blocks=1 00:18:48.097 --rc geninfo_unexecuted_blocks=1 00:18:48.097 00:18:48.097 ' 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:48.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.097 --rc genhtml_branch_coverage=1 00:18:48.097 --rc genhtml_function_coverage=1 00:18:48.097 --rc genhtml_legend=1 00:18:48.097 --rc geninfo_all_blocks=1 00:18:48.097 --rc geninfo_unexecuted_blocks=1 00:18:48.097 00:18:48.097 ' 00:18:48.097 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:48.098 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:48.098 Cannot find device "nvmf_init_br" 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:48.098 Cannot find device "nvmf_init_br2" 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:48.098 Cannot find device "nvmf_tgt_br" 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:48.098 Cannot find device "nvmf_tgt_br2" 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:48.098 Cannot find device "nvmf_init_br" 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:48.098 Cannot find device "nvmf_init_br2" 00:18:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:18:48.099 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:48.099 Cannot find device "nvmf_tgt_br" 00:18:48.099 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:18:48.099 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:48.099 Cannot find device "nvmf_tgt_br2" 00:18:48.099 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:18:48.099 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:48.099 Cannot find device "nvmf_br" 00:18:48.099 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:18:48.099 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:48.099 Cannot find device "nvmf_init_if" 00:18:48.099 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:18:48.099 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:48.358 Cannot find device "nvmf_init_if2" 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:48.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:48.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:48.358 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:48.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:48.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:18:48.358 00:18:48.358 --- 10.0.0.3 ping statistics --- 00:18:48.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.358 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:48.358 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:48.358 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:18:48.358 00:18:48.358 --- 10.0.0.4 ping statistics --- 00:18:48.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.358 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:48.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:48.358 00:18:48.358 --- 10.0.0.1 ping statistics --- 00:18:48.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.358 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:48.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:48.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:48.358 00:18:48.358 --- 10.0.0.2 ping statistics --- 00:18:48.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.358 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=91107 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 91107 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 91107 ']' 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.358 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.616 [2024-12-16 11:40:49.167240] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:18:48.617 [2024-12-16 11:40:49.167963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.617 [2024-12-16 11:40:49.315071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.875 [2024-12-16 11:40:49.370645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.875 [2024-12-16 11:40:49.370719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.875 [2024-12-16 11:40:49.370746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.875 [2024-12-16 11:40:49.370754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.875 [2024-12-16 11:40:49.370761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.875 [2024-12-16 11:40:49.371202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.875 [2024-12-16 11:40:49.551306] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.875 [2024-12-16 11:40:49.559448] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.875 null0 00:18:48.875 11:40:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.875 null1 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=91145 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 91145 /tmp/host.sock 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 91145 ']' 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.875 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.875 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.133 [2024-12-16 11:40:49.655803] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:18:49.133 [2024-12-16 11:40:49.655903] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91145 ] 00:18:49.133 [2024-12-16 11:40:49.807588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.133 [2024-12-16 11:40:49.876683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.067 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.325 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:50.325 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:50.325 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.325 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.325 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:50.325 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.325 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:50.325 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.325 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.325 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:50.325 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:50.325 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.325 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.325 [2024-12-16 11:40:51.075916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:50.583 11:40:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:50.583 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:18:50.584 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:18:51.148 [2024-12-16 11:40:51.694611] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:51.148 [2024-12-16 11:40:51.694643] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:51.148 [2024-12-16 11:40:51.694660] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:51.148 [2024-12-16 11:40:51.780752] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:51.148 [2024-12-16 11:40:51.835181] 
bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:51.148 [2024-12-16 11:40:51.835938] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x9c6ed0:1 started. 00:18:51.148 [2024-12-16 11:40:51.837898] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:51.148 [2024-12-16 11:40:51.837920] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:51.148 [2024-12-16 11:40:51.842998] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x9c6ed0 was disconnected and freed. delete nvme_qpair. 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.714 11:40:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:51.714 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:51.978 [2024-12-16 11:40:52.556883] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x9c7480:1 started. 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:51.978 [2024-12-16 11:40:52.563775] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x9c7480 was disconnected and freed. delete nvme_qpair. 
00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.978 [2024-12-16 11:40:52.668862] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:51.978 [2024-12-16 11:40:52.670107] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:51.978 [2024-12-16 11:40:52.670138] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:51.978 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:51.979 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.979 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:51.979 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.979 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:51.979 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:52.245 [2024-12-16 11:40:52.758237] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:52.245 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:52.246 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:52.246 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.246 [2024-12-16 11:40:52.817645] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:18:52.246 [2024-12-16 11:40:52.817717] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:52.246 [2024-12-16 11:40:52.817728] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:52.246 [2024-12-16 11:40:52.817734] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:52.246 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:52.246 11:40:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:53.180 11:40:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.180 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.439 [2024-12-16 11:40:53.986396] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:53.439 [2024-12-16 11:40:53.986435] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.439 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:53.439 [2024-12-16 11:40:53.995468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:53.439 [2024-12-16 11:40:53.995504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.439 [2024-12-16 11:40:53.995518] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:53.439 [2024-12-16 11:40:53.995528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.439 [2024-12-16 11:40:53.995538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:53.439 [2024-12-16 11:40:53.995547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.439 [2024-12-16 11:40:53.995557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:53.439 [2024-12-16 11:40:53.995566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.439 [2024-12-16 11:40:53.995576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a40b0 is same with the state(6) to be set 00:18:53.439 [2024-12-16 11:40:54.005426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a40b0 (9): Bad file descriptor 00:18:53.439 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.439 [2024-12-16 11:40:54.015442] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:53.439 [2024-12-16 11:40:54.015470] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:53.439 [2024-12-16 11:40:54.015478] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:53.439 [2024-12-16 11:40:54.015484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:53.439 [2024-12-16 11:40:54.015518] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:53.440 [2024-12-16 11:40:54.015606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:53.440 [2024-12-16 11:40:54.015627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a40b0 with addr=10.0.0.3, port=4420 00:18:53.440 [2024-12-16 11:40:54.015638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a40b0 is same with the state(6) to be set 00:18:53.440 [2024-12-16 11:40:54.015679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a40b0 (9): Bad file descriptor 00:18:53.440 [2024-12-16 11:40:54.015695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:53.440 [2024-12-16 11:40:54.015704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:53.440 [2024-12-16 11:40:54.015715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:53.440 [2024-12-16 11:40:54.015724] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:18:53.440 [2024-12-16 11:40:54.015730] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:53.440 [2024-12-16 11:40:54.015736] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:53.440 [2024-12-16 11:40:54.025529] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:53.440 [2024-12-16 11:40:54.025556] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:53.440 [2024-12-16 11:40:54.025563] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:53.440 [2024-12-16 11:40:54.025571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:53.440 [2024-12-16 11:40:54.025598] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:53.440 [2024-12-16 11:40:54.025652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:53.440 [2024-12-16 11:40:54.025672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a40b0 with addr=10.0.0.3, port=4420 00:18:53.440 [2024-12-16 11:40:54.025683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a40b0 is same with the state(6) to be set 00:18:53.440 [2024-12-16 11:40:54.025699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a40b0 (9): Bad file descriptor 00:18:53.440 [2024-12-16 11:40:54.025712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:53.440 [2024-12-16 11:40:54.025721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:53.440 [2024-12-16 11:40:54.025731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:53.440 [2024-12-16 11:40:54.025739] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:53.440 [2024-12-16 11:40:54.025744] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:53.440 [2024-12-16 11:40:54.025749] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:53.440 [2024-12-16 11:40:54.035608] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:53.440 [2024-12-16 11:40:54.035634] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:53.440 [2024-12-16 11:40:54.035693] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:53.440 [2024-12-16 11:40:54.035698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:53.440 [2024-12-16 11:40:54.035726] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:18:53.440 [2024-12-16 11:40:54.035785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:53.440 [2024-12-16 11:40:54.035805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a40b0 with addr=10.0.0.3, port=4420 00:18:53.440 [2024-12-16 11:40:54.035816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a40b0 is same with the state(6) to be set 00:18:53.440 [2024-12-16 11:40:54.035832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a40b0 (9): Bad file descriptor 00:18:53.440 [2024-12-16 11:40:54.035846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:53.440 [2024-12-16 11:40:54.035855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:53.440 [2024-12-16 11:40:54.035864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:53.440 [2024-12-16 11:40:54.035872] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:53.440 [2024-12-16 11:40:54.035878] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:53.440 [2024-12-16 11:40:54.035883] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:53.440 [2024-12-16 11:40:54.045736] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:53.440 [2024-12-16 11:40:54.045759] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:53.440 [2024-12-16 11:40:54.045765] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:53.440 [2024-12-16 11:40:54.045770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:53.440 [2024-12-16 11:40:54.045798] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
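The get_bdev_list calls interleaved with the reconnect records are a single host-side RPC plus jq post-processing. A sketch assembled from the @55 trace lines; rpc_cmd is assumed to be SPDK's scripts/rpc.py wrapper from autotest_common.sh, and this is a reconstruction, not a verbatim copy of discovery.sh.

# host/discovery.sh@55 as seen in the trace: list bdev names over the
# host RPC socket, normalized so string comparisons such as
# [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]] are stable.
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}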
00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:53.440 [2024-12-16 11:40:54.045852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:53.440 [2024-12-16 11:40:54.045871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a40b0 with addr=10.0.0.3, port=4420 00:18:53.440 [2024-12-16 11:40:54.045882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a40b0 is same with the state(6) to be set 00:18:53.440 [2024-12-16 11:40:54.045897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a40b0 (9): Bad file descriptor 00:18:53.440 [2024-12-16 11:40:54.045912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:53.440 [2024-12-16 11:40:54.045921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:53.440 [2024-12-16 11:40:54.045930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:53.440 [2024-12-16 11:40:54.045938] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:53.440 [2024-12-16 11:40:54.045944] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:53.440 [2024-12-16 11:40:54.045949] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.440 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:53.440 [2024-12-16 11:40:54.055840] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:53.440 [2024-12-16 11:40:54.055869] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:53.440 [2024-12-16 11:40:54.055876] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:53.440 [2024-12-16 11:40:54.055882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:53.440 [2024-12-16 11:40:54.055907] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:18:53.440 [2024-12-16 11:40:54.055962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:53.440 [2024-12-16 11:40:54.055997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a40b0 with addr=10.0.0.3, port=4420 00:18:53.440 [2024-12-16 11:40:54.056007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a40b0 is same with the state(6) to be set 00:18:53.440 [2024-12-16 11:40:54.056038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a40b0 (9): Bad file descriptor 00:18:53.440 [2024-12-16 11:40:54.056052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:53.440 [2024-12-16 11:40:54.056060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:53.440 [2024-12-16 11:40:54.056069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:53.440 [2024-12-16 11:40:54.056077] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:53.440 [2024-12-16 11:40:54.056083] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:53.440 [2024-12-16 11:40:54.056088] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:53.440 [2024-12-16 11:40:54.065916] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:53.440 [2024-12-16 11:40:54.065943] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:53.440 [2024-12-16 11:40:54.065950] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:53.440 [2024-12-16 11:40:54.065955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:53.440 [2024-12-16 11:40:54.065980] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:53.440 [2024-12-16 11:40:54.066033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:53.440 [2024-12-16 11:40:54.066053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a40b0 with addr=10.0.0.3, port=4420 00:18:53.440 [2024-12-16 11:40:54.066064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a40b0 is same with the state(6) to be set 00:18:53.440 [2024-12-16 11:40:54.066079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a40b0 (9): Bad file descriptor 00:18:53.440 [2024-12-16 11:40:54.066093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:53.441 [2024-12-16 11:40:54.066101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:53.441 [2024-12-16 11:40:54.066110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:53.441 [2024-12-16 11:40:54.066118] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
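While the reconnect records above repeat (port 4420 now refuses connections, hence the errno 111 lines), the test polls the surviving controller paths through the @63 helper until only the second listener remains. A sketch assembled from the @63 trace lines, under the same rpc_cmd assumption as above.

# host/discovery.sh@63 as seen in the trace: one controller can expose
# several paths; emit each path's trsvcid ("4420 4421" before the
# listener removal, just "4421" once the reconnect attempts above give up).
get_subsystem_paths() {
	local name=$1 # controller name, e.g. nvme0
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" |
		jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}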
00:18:53.441 [2024-12-16 11:40:54.066124] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:53.441 [2024-12-16 11:40:54.066128] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:53.441 [2024-12-16 11:40:54.072874] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:18:53.441 [2024-12-16 11:40:54.072908] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:53.441 11:40:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:53.441 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.699 11:40:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:55.074 [2024-12-16 11:40:55.405177] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:55.074 [2024-12-16 11:40:55.405246] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:55.074 [2024-12-16 11:40:55.405283] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:55.074 [2024-12-16 11:40:55.492342] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:18:55.074 [2024-12-16 11:40:55.558767] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:18:55.074 [2024-12-16 11:40:55.559421] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x9a1860:1 started. 00:18:55.074 [2024-12-16 11:40:55.561701] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:55.074 [2024-12-16 11:40:55.561763] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:55.074 [2024-12-16 11:40:55.563945] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x9a1860 was disconnected and freed. delete nvme_qpair. 
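Starting a second discovery service with the same name is expected to fail with -17 (File exists), which the NOT wrapper in the trace below converts into a test pass. A hedged sketch of that expect-failure pattern, under the same rpc_cmd assumption; the real NOT helper in autotest_common.sh is more general than this.

# Expect the duplicate bdev_nvme_start_discovery to be rejected;
# treat accidental success as the test failure.
if rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
	-b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
	-q nqn.2021-12.io.spdk:test -w; then
	echo "duplicate discovery start unexpectedly succeeded" >&2
	exit 1
fi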
00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:55.074 2024/12/16 11:40:55 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:55.074 request: 00:18:55.074 { 00:18:55.074 "method": "bdev_nvme_start_discovery", 00:18:55.074 "params": { 00:18:55.074 "name": "nvme", 00:18:55.074 "trtype": "tcp", 00:18:55.074 "traddr": "10.0.0.3", 00:18:55.074 "adrfam": "ipv4", 00:18:55.074 "trsvcid": "8009", 00:18:55.074 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:55.074 "wait_for_attach": true 00:18:55.074 } 00:18:55.074 } 00:18:55.074 Got JSON-RPC error response 00:18:55.074 GoRPCClient: error on JSON-RPC call 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:55.074 11:40:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:55.074 2024/12/16 11:40:55 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:55.074 request: 00:18:55.074 { 00:18:55.074 "method": "bdev_nvme_start_discovery", 00:18:55.074 "params": { 00:18:55.074 "name": "nvme_second", 00:18:55.074 "trtype": "tcp", 00:18:55.074 "traddr": "10.0.0.3", 00:18:55.074 "adrfam": "ipv4", 00:18:55.074 "trsvcid": "8009", 00:18:55.074 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:55.074 "wait_for_attach": true 00:18:55.074 } 00:18:55.074 } 00:18:55.074 Got JSON-RPC error response 00:18:55.074 GoRPCClient: error on JSON-RPC call 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:55.074 
11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:55.074 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:55.075 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:55.075 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:55.333 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.333 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:55.333 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.333 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:55.333 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.333 11:40:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:56.268 [2024-12-16 11:40:56.834118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.268 [2024-12-16 11:40:56.834245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x942170 with addr=10.0.0.3, port=8010 00:18:56.268 [2024-12-16 11:40:56.834273] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:56.268 [2024-12-16 11:40:56.834284] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:56.268 [2024-12-16 11:40:56.834294] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:57.203 [2024-12-16 11:40:57.834131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:57.203 [2024-12-16 11:40:57.834491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x942170 with addr=10.0.0.3, port=8010 00:18:57.203 [2024-12-16 11:40:57.834530] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:57.203 [2024-12-16 11:40:57.834543] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:57.203 [2024-12-16 11:40:57.834554] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:58.193 [2024-12-16 11:40:58.833957] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:18:58.193 2024/12/16 11:40:58 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:18:58.193 request: 00:18:58.193 { 00:18:58.193 "method": "bdev_nvme_start_discovery", 00:18:58.193 "params": { 00:18:58.193 "name": "nvme_second", 00:18:58.193 "trtype": "tcp", 00:18:58.193 "traddr": "10.0.0.3", 00:18:58.193 "adrfam": "ipv4", 00:18:58.193 "trsvcid": "8010", 00:18:58.193 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:58.193 "wait_for_attach": false, 00:18:58.193 "attach_timeout_ms": 3000 00:18:58.193 } 00:18:58.193 } 00:18:58.193 Got JSON-RPC error response 00:18:58.193 GoRPCClient: error on JSON-RPC call 00:18:58.193 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:58.193 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:58.193 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.193 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.193 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.193 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:58.193 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:58.193 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:58.193 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.193 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:58.193 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:58.193 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:58.194 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.194 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:58.194 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:58.194 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 91145 00:18:58.194 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:58.194 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:58.194 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:18:58.194 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:58.194 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:18:58.194 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:58.194 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:58.194 rmmod nvme_tcp 00:18:58.452 rmmod nvme_fabrics 00:18:58.452 rmmod nvme_keyring 00:18:58.452 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:58.452 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:18:58.452 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:18:58.452 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 91107 ']' 00:18:58.452 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 91107 00:18:58.452 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 91107 ']' 00:18:58.452 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 91107 00:18:58.452 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:18:58.452 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.452 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91107 00:18:58.452 killing process with pid 91107 00:18:58.452 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:58.452 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:58.452 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91107' 00:18:58.452 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 91107 00:18:58.452 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 91107 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.711 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.969 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:18:58.969 00:18:58.969 real 0m10.993s 00:18:58.969 user 0m21.668s 00:18:58.969 sys 0m1.774s 00:18:58.969 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.969 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:58.969 ************************************ 00:18:58.969 END TEST nvmf_host_discovery 00:18:58.969 ************************************ 00:18:58.969 11:40:59 
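The nvme_second failure that closed out the discovery test above is the intended negative path: nothing listens on 10.0.0.3:8010, so each connect() returns errno 111 (ECONNREFUSED) and the discovery poller gives up once the 3000 ms attach timeout expires, surfacing to the caller as JSON-RPC Code=-110 (connection timed out). A minimal sketch of the same call outside the harness, assuming the host-side RPC socket at /tmp/host.sock that rpc_cmd uses in this test:

# Point a discovery service at a port with no listener; -T bounds the attach
# wait in milliseconds, so this fails with Code=-110 after roughly 3 seconds.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000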
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:58.969 11:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:58.969 11:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.969 11:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.969 ************************************ 00:18:58.969 START TEST nvmf_host_multipath_status 00:18:58.969 ************************************ 00:18:58.969 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:58.969 * Looking for test storage... 00:18:58.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:58.969 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:58.969 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:18:58.969 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:59.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.229 --rc genhtml_branch_coverage=1 00:18:59.229 --rc genhtml_function_coverage=1 00:18:59.229 --rc genhtml_legend=1 00:18:59.229 --rc geninfo_all_blocks=1 00:18:59.229 --rc geninfo_unexecuted_blocks=1 00:18:59.229 00:18:59.229 ' 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:59.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.229 --rc genhtml_branch_coverage=1 00:18:59.229 --rc genhtml_function_coverage=1 00:18:59.229 --rc genhtml_legend=1 00:18:59.229 --rc geninfo_all_blocks=1 00:18:59.229 --rc geninfo_unexecuted_blocks=1 00:18:59.229 00:18:59.229 ' 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:59.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.229 --rc genhtml_branch_coverage=1 00:18:59.229 --rc genhtml_function_coverage=1 00:18:59.229 --rc genhtml_legend=1 00:18:59.229 --rc geninfo_all_blocks=1 00:18:59.229 --rc geninfo_unexecuted_blocks=1 00:18:59.229 00:18:59.229 ' 00:18:59.229 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:59.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.229 --rc genhtml_branch_coverage=1 00:18:59.229 --rc genhtml_function_coverage=1 00:18:59.229 --rc genhtml_legend=1 00:18:59.229 --rc geninfo_all_blocks=1 00:18:59.230 --rc geninfo_unexecuted_blocks=1 00:18:59.230 00:18:59.230 ' 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:59.230 11:40:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.230 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:59.230 Cannot find device "nvmf_init_br" 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:59.230 Cannot find device "nvmf_init_br2" 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:59.230 Cannot find device "nvmf_tgt_br" 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:59.230 Cannot find device "nvmf_tgt_br2" 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:59.230 Cannot find device "nvmf_init_br" 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:59.230 Cannot find device "nvmf_init_br2" 00:18:59.230 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:59.231 Cannot find device "nvmf_tgt_br" 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:59.231 Cannot find device "nvmf_tgt_br2" 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:59.231 Cannot find device "nvmf_br" 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:18:59.231 Cannot find device "nvmf_init_if" 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:59.231 Cannot find device "nvmf_init_if2" 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:59.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:59.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:59.231 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:59.489 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:59.489 11:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:59.489 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:59.489 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:59.489 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:59.489 00:18:59.489 --- 10.0.0.3 ping statistics --- 00:18:59.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.489 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:59.490 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:59.490 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:18:59.490 00:18:59.490 --- 10.0.0.4 ping statistics --- 00:18:59.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.490 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:59.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:59.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:18:59.490 00:18:59.490 --- 10.0.0.1 ping statistics --- 00:18:59.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.490 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:59.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:18:59.490 00:18:59.490 --- 10.0.0.2 ping statistics --- 00:18:59.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.490 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=91682 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 91682 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 91682 ']' 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
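nvmfappstart -m 0x3 above expands to the namespaced launch just logged; the 0x3 core mask pins the target to cores 0 and 1, which is why two reactors report in a few lines below. Condensed to its essentials (the backgrounding and pid capture sketch how the harness behaves rather than quoting it):

# Launch the target inside the test namespace and remember its pid; the
# harness then polls the default RPC socket (/var/tmp/spdk.sock) via
# waitforlisten before issuing any RPCs.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!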
00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.490 11:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:59.748 [2024-12-16 11:41:00.256219] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:18:59.748 [2024-12-16 11:41:00.256313] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.748 [2024-12-16 11:41:00.407206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:59.748 [2024-12-16 11:41:00.475500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.748 [2024-12-16 11:41:00.475585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.748 [2024-12-16 11:41:00.475605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.748 [2024-12-16 11:41:00.475616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.748 [2024-12-16 11:41:00.475625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.748 [2024-12-16 11:41:00.476955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.748 [2024-12-16 11:41:00.476972] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.682 11:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.682 11:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:00.682 11:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:00.682 11:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:00.682 11:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:00.682 11:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.683 11:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=91682 00:19:00.683 11:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:00.941 [2024-12-16 11:41:01.629867] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.941 11:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:01.199 Malloc0 00:19:01.458 11:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:01.716 11:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:01.974 11:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:02.232 [2024-12-16 11:41:02.754584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:02.232 11:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:02.491 [2024-12-16 11:41:03.006735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:02.491 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:02.491 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91786 00:19:02.491 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.491 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91786 /var/tmp/bdevperf.sock 00:19:02.491 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 91786 ']' 00:19:02.491 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.491 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.491 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
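At this point the target side is fully provisioned. The sequence above reduces to six rpc.py calls; the flag glosses in the comments are best-effort readings of rpc.py's options, and the -r (ANA reporting) one is what the ANA-state flips later in this test depend on:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py       # default socket /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB IO unit size
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -r -m 2                  # -a any host, -r ANA reporting, -m max namespaces
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421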
00:19:02.491 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.491 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:02.750 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.750 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:02.750 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:03.008 11:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:03.337 Nvme0n1 00:19:03.338 11:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:03.904 Nvme0n1 00:19:03.904 11:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:03.904 11:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:05.807 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:05.807 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:06.066 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:06.324 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:07.701 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:07.701 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:07.701 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.701 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:07.701 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.701 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:07.701 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:07.701 11:41:08 
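The two attach calls above are what make multipath happen on the host: both use the same -b Nvme0 bdev name, the same subsystem NQN, and -x multipath, so the 4421 attach registers a second path under the existing Nvme0n1 rather than creating a new controller. Condensed against the bdevperf RPC socket from this run (flag glosses are best-effort readings of the harness options, not authoritative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1     # retry count -1: never give up
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10     # first path
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10     # second path: same name and NQN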
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.959 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:07.959 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:07.959 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.959 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:08.218 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.218 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:08.218 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.218 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:08.476 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.476 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:08.476 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.476 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:08.735 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.735 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:08.735 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.735 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:09.302 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.302 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:09.302 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:09.302 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
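Every check_status above is six port_status probes, and each probe is one jq filter over bdev_nvme_get_io_paths: select the io_path whose listener port matches and print one boolean field (current, connected, or accessible). Spelled out for one case:

# Is the 4420 path the currently selected one?
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'
# Prints "true" while 4420 is the optimized path; it flips to "false" once
# the ANA swap below promotes 4421.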
00:19:09.560 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:10.934 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:10.934 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:10.934 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.934 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:10.934 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:10.934 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:10.934 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.934 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:11.192 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.192 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:11.192 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.192 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:11.758 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.758 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:11.758 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.758 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:11.758 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.758 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:11.758 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.758 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:12.325 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.325 11:41:12 
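Each set_ANA_state step is two target-side RPCs, one per listener; the host picks up the ANA change (async event followed by an ANA log page re-read) and re-elects the current path, which the next check_status then verifies. The non_optimized/optimized case just exercised:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -n non_optimized       # demote the first path
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4421 -n optimized           # promote the second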
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:12.325 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.325 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:12.583 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.583 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:12.583 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:12.840 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:13.098 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:14.032 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:14.032 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:14.032 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.032 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:14.290 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.290 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:14.290 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.290 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:14.548 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:14.548 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:14.548 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:14.548 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.115 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.115 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:15.115 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.115 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:15.373 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.373 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:15.373 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.373 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:15.631 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.631 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:15.631 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.631 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:15.889 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.889 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:15.889 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:16.147 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:16.406 11:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:17.342 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:17.342 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:17.601 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.601 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:17.860 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.860 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:19:17.860 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.860 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:18.118 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:18.118 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:18.118 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.118 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:18.377 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.377 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:18.377 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:18.377 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.635 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.635 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:18.635 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:18.635 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.202 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:19.202 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:19.202 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.202 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:19.460 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:19.461 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:19.461 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:19.719 11:41:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:19.977 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:20.912 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:20.912 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:20.912 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.912 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:21.170 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:21.170 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:21.170 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.170 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:21.428 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:21.428 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:21.428 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.428 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:21.686 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.686 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:21.686 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:21.686 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.945 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.945 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:21.945 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.945 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:19:22.204 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:22.204 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:22.204 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.204 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:22.772 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:22.772 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:22.772 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:23.030 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:23.289 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:24.223 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:24.223 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:24.223 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.223 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:24.492 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:24.492 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:24.492 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.492 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:24.797 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.797 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:24.797 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.797 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
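Every port_status check in this trace reduces to one pattern: ask the bdevperf side for its current view of all I/O paths with bdev_nvme_get_io_paths, pick the path whose listener port (trsvcid) matches, and compare a single boolean field (current / connected / accessible) against the expected value. A minimal sketch of that helper in the test's own bash style, assuming rpc.py and jq exactly as invoked throughout this log (the real helper lives in test/nvmf/host/multipath_status.sh and may differ in detail):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock

  # port_status <trsvcid> <field> <expected>: read one boolean of the I/O path
  # matching the given listener port; non-zero exit status on mismatch.
  port_status() {
      local port=$1 field=$2 expected=$3 actual
      actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ $actual == "$expected" ]]
  }

  # e.g. assert that the 4421 path is reachable but not the one carrying I/O:
  port_status 4421 current false
  port_status 4421 accessible true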
00:19:25.056 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.056 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:25.056 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.056 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:25.314 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.314 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:25.314 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.315 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:25.882 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:25.882 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:25.882 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:25.882 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.141 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:26.141 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:26.399 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:26.399 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:26.657 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:26.915 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:27.850 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:27.850 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:27.850 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
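Up to this point the bdev has been using SPDK's default active_passive path selection, which is why every earlier round expects current=true on exactly one port. The @116 step switches Nvme0n1 to active_active, and @119 marks both listeners optimized, so the check that follows expects current=true on 4420 and 4421 simultaneously. Condensed, the sequence is (NQN, addresses and socket paths copied from the trace; the one-second sleep mirrors the script, since the host refreshes ANA state asynchronously from the ANA log page):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Policy change goes to bdevperf's RPC socket; ANA changes go to the target.
  "$rpc_py" -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -n optimized
  "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4421 -n optimized
  sleep 1   # give the initiator time to observe the ANA state change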
00:19:27.850 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:28.109 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.109 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:28.109 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.109 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:28.677 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.677 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:28.677 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.677 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:28.935 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.935 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:28.935 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:28.935 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.194 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.194 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:29.194 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.194 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:29.453 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.453 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:29.453 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.453 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:29.712 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.712 
11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:29.712 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:29.971 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:30.229 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:31.165 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:31.165 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:31.165 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.165 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:31.734 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:31.734 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:31.734 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.735 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:31.993 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.993 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:31.993 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:31.993 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.252 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.252 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:32.252 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:32.252 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.511 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.511 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:32.511 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.511 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:32.770 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.770 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:32.770 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.770 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:33.028 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.028 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:33.028 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:33.287 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:33.855 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:34.789 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:34.789 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:34.789 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.789 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:35.047 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.048 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:35.048 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:35.048 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.327 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.327 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:19:35.327 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.327 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:35.586 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.586 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:35.586 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:35.586 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.844 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.844 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:35.844 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.844 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:36.103 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.103 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:36.103 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.103 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:36.362 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.362 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:36.362 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:36.928 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:36.928 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:38.303 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:38.303 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:38.303 11:41:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.303 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:38.303 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.303 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:38.303 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.303 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:38.571 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:38.571 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:38.571 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.571 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:38.845 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.845 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:38.845 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.845 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:39.411 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:39.411 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:39.411 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.411 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:39.669 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:39.669 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:39.669 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.669 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible'
00:19:39.927 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:19:39.927 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91786
00:19:39.927 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 91786 ']'
00:19:39.927 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 91786
00:19:39.927 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:19:39.927 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:39.927 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91786
00:19:39.927 killing process with pid 91786 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:39.927 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:39.927 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91786'
00:19:39.927 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 91786
00:19:39.927 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 91786
00:19:39.927 {
00:19:39.927 "results": [
00:19:39.927 {
00:19:39.927 "job": "Nvme0n1",
00:19:39.927 "core_mask": "0x4",
00:19:39.927 "workload": "verify",
00:19:39.927 "status": "terminated",
00:19:39.927 "verify_range": {
00:19:39.927 "start": 0,
00:19:39.927 "length": 16384
00:19:39.927 },
00:19:39.927 "queue_depth": 128,
00:19:39.927 "io_size": 4096,
00:19:39.927 "runtime": 35.910495,
00:19:39.927 "iops": 8450.120222514337,
00:19:39.927 "mibps": 33.00828211919663,
00:19:39.928 "io_failed": 0,
00:19:39.928 "io_timeout": 0,
00:19:39.928 "avg_latency_us": 15116.894278750171,
00:19:39.928 "min_latency_us": 166.63272727272727,
00:19:39.928 "max_latency_us": 4026531.84
00:19:39.928 }
00:19:39.928 ],
00:19:39.928 "core_count": 1
00:19:39.928 }
00:19:40.189 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91786
00:19:40.189 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:40.189 [2024-12-16 11:41:03.073131] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:19:40.189 [2024-12-16 11:41:03.073283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91786 ]
00:19:40.189 [2024-12-16 11:41:03.217215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:40.189 [2024-12-16 11:41:03.276547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:19:40.189 Running I/O for 90 seconds...
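The JSON block above is bdevperf's summary for the verify job that was just killed: roughly 8450 IOPS at 4 KiB I/O (about 33 MiB/s) sustained over the 35.9 s of ANA flapping, with zero failed or timed-out I/Os; the ~4 s max_latency_us is consistent with I/O being held while both paths were inaccessible. Were such a summary captured to a file, the headline numbers could be pulled out with jq along these lines (results.json is an illustrative stand-in, not a file the test creates):

  # Print one summary line per job from a captured bdevperf result.
  jq -r '.results[]
      | "\(.job): \(.iops | floor) IOPS, \(.io_failed) failed, avg \(.avg_latency_us) us"' \
      results.json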
00:19:40.189 9204.00 IOPS, 35.95 MiB/s [2024-12-16T11:41:40.942Z] 9258.50 IOPS, 36.17 MiB/s [2024-12-16T11:41:40.942Z] 9288.00 IOPS, 36.28 MiB/s [2024-12-16T11:41:40.942Z] 9317.25 IOPS, 36.40 MiB/s [2024-12-16T11:41:40.942Z] 9283.80 IOPS, 36.26 MiB/s [2024-12-16T11:41:40.942Z] 9248.67 IOPS, 36.13 MiB/s [2024-12-16T11:41:40.942Z] 9264.57 IOPS, 36.19 MiB/s [2024-12-16T11:41:40.942Z] 9231.00 IOPS, 36.06 MiB/s [2024-12-16T11:41:40.942Z] 9175.78 IOPS, 35.84 MiB/s [2024-12-16T11:41:40.942Z] 9190.10 IOPS, 35.90 MiB/s [2024-12-16T11:41:40.942Z] 9177.64 IOPS, 35.85 MiB/s [2024-12-16T11:41:40.942Z] 9187.58 IOPS, 35.89 MiB/s [2024-12-16T11:41:40.942Z] 9196.31 IOPS, 35.92 MiB/s [2024-12-16T11:41:40.942Z] 9203.07 IOPS, 35.95 MiB/s [2024-12-16T11:41:40.942Z] 9197.73 IOPS, 35.93 MiB/s [2024-12-16T11:41:40.942Z] [2024-12-16 11:41:20.224760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.224846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.224924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.224947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.224968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.224984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225195] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101792 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.189 [2024-12-16 11:41:20.225841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:40.189 [2024-12-16 11:41:20.225862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.225893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.225914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.225930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.225952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.225968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.225989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.226004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.226025] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.226041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.226062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.226078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.226099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.226115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.226136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.226168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.226190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.226229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.226660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.226689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.226718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.226749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.226775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.226793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.226817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.226834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 11:41:20.226857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.190 [2024-12-16 11:41:20.226873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:40.190 [2024-12-16 
00:19:40.190 [2024-12-16 11:41:20.226897 - 11:41:20.231109] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: interleaved WRITE (lba 101952-102408, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba 101400-101648, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands on sqid:1, every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 [roughly ninety near-identical command/completion pairs collapsed]
00:19:40.192 8984.19 IOPS, 35.09 MiB/s
[2024-12-16T11:41:40.945Z] 8455.71 IOPS, 33.03 MiB/s
[2024-12-16T11:41:40.945Z] 7985.94 IOPS, 31.20 MiB/s
[2024-12-16T11:41:40.945Z] 7565.63 IOPS, 29.55 MiB/s
[2024-12-16T11:41:40.945Z] 7356.55 IOPS, 28.74 MiB/s
[2024-12-16T11:41:40.945Z] 7432.71 IOPS, 29.03 MiB/s
[2024-12-16T11:41:40.945Z] 7503.45 IOPS, 29.31 MiB/s
[2024-12-16T11:41:40.945Z] 7594.04 IOPS, 29.66 MiB/s
[2024-12-16T11:41:40.945Z] 7762.79 IOPS, 30.32 MiB/s
[2024-12-16T11:41:40.945Z] 7913.16 IOPS, 30.91 MiB/s
[2024-12-16T11:41:40.945Z] 8052.19 IOPS, 31.45 MiB/s
[2024-12-16T11:41:40.945Z] 8092.33 IOPS, 31.61 MiB/s
[2024-12-16T11:41:40.945Z] 8118.71 IOPS, 31.71 MiB/s
[2024-12-16T11:41:40.945Z] 8146.00 IOPS, 31.82 MiB/s
[2024-12-16T11:41:40.945Z] 8198.07 IOPS, 32.02 MiB/s
[2024-12-16T11:41:40.945Z] 8317.06 IOPS, 32.49 MiB/s
[2024-12-16T11:41:40.945Z] 8420.97 IOPS, 32.89 MiB/s
[2024-12-16T11:41:40.945Z] 8466.91 IOPS, 33.07 MiB/s
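The pairs collapsed above are SPDK's qpair layer printing each failed submission next to its completion. The status tuple (03/02) decodes, per the NVMe base specification, as Status Code Type 0x3 (Path Related Status) with Status Code 0x2, Asymmetric Access Inaccessible: the ANA group serving the namespace went inaccessible while I/O was in flight, which is exactly the condition this multipath test provokes before the host fails over. A quick tally of such failures from a saved console log (a minimal sketch; the capture file name build.log is illustrative):

    # Count failed READ vs. WRITE submissions, then confirm every completion
    # carried the ANA-inaccessible status tuple (03/02).
    grep -o 'print_command: \*NOTICE\*: [A-Z]*' build.log | sort | uniq -c
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log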
00:19:40.192 [2024-12-16 11:41:37.651874 - 11:41:37.655231] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: second burst on sqid:1, READ (lba 10920-11392) and WRITE (lba 11536-11728) commands, again every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 [about thirty near-identical command/completion pairs collapsed]
00:19:40.193 8464.35 IOPS, 33.06 MiB/s
[2024-12-16T11:41:40.946Z] 8462.51 IOPS, 33.06 MiB/s
[2024-12-16T11:41:40.946Z] Received shutdown signal, test time was about 35.911313 seconds
00:19:40.193
00:19:40.193 Latency(us)
00:19:40.193 Device Information                                                        : runtime(s)   IOPS      MiB/s   Fail/s   TO/s   Average    min      max
00:19:40.193 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:40.193 Verification LBA range: start 0x0 length 0x4000
00:19:40.193 Nvme0n1                                                                   : 35.91        8450.12   33.01   0.00     0.00   15116.89   166.63   4026531.84
00:19:40.193 ===================================================================================================================
00:19:40.193 Total                                                                     :              8450.12   33.01   0.00     0.00   15116.89   166.63   4026531.84
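The summary row is internally consistent: at the 4096-byte I/O size shown in the job line, 8450.12 IOPS works out to 8450.12 * 4096 / 2^20, about 33.01 MiB/s, matching the MiB/s column. A one-line check (plain awk, nothing test-specific):

    awk 'BEGIN { printf "%.2f MiB/s\n", 8450.12 * 4096 / (1024 * 1024) }'   # prints 33.01 MiB/s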
00:19:40.193 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:40.452 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:19:40.452 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:40.452 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:19:40.452 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:40.452 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:40.452 rmmod nvme_tcp
00:19:40.452 rmmod nvme_fabrics
00:19:40.452 rmmod nvme_keyring
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 91682 ']'
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 91682
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 91682 ']'
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 91682
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91682
00:19:40.452 killing process with pid 91682
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91682'
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 91682
00:19:40.452 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 91682
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:19:40.711 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
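The teardown above mirrors setup in reverse: unload the NVMe fabrics kernel modules, kill the target process (pid 91682), strip SPDK's firewall rules, then dismantle the veth/bridge topology and the nvmf_tgt_ns_spdk namespace. The rule strip is the three piped commands traced at nvmf/common.sh@791; as a self-contained sketch (assuming, as the grep pattern suggests, that SPDK tags its rules with an SPDK_NVMF marker):

    # Reload the current ruleset minus every rule whose saved form mentions
    # SPDK_NVMF; all other rules survive the round trip.
    iptables-save | grep -v SPDK_NVMF | iptables-restore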
00:19:40.970 ************************************
00:19:40.970 END TEST nvmf_host_multipath_status
00:19:40.970 ************************************
00:19:40.970
00:19:40.970 real    0m42.082s
00:19:40.970 user    2m17.272s
00:19:40.970 sys     0m10.403s
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:40.970 ************************************
00:19:40.970 START TEST nvmf_discovery_remove_ifc
00:19:40.970 ************************************
00:19:40.970 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:19:41.230 * Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
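The cmp_versions trace above is a plain field-by-field comparison: split both version strings on '.', '-' and ':' (IFS=.-:), walk the fields left to right, and decide at the first inequality; here 1 < 2 on the first field, so lcov 1.15 sorts before 2 and the matching coverage options are exported. A condensed re-sketch of that logic (simplified, not the exact scripts/common.sh source):

    # lt A B -> exit 0 when version A < version B, comparing numeric
    # dot-separated fields left to right; missing fields count as 0.
    lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"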
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.230 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.231 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:41.231 11:41:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:41.231 Cannot find device "nvmf_init_br" 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:41.231 Cannot find device "nvmf_init_br2" 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:41.231 Cannot find device "nvmf_tgt_br" 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:41.231 Cannot find device "nvmf_tgt_br2" 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:41.231 Cannot find device "nvmf_init_br" 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:41.231 Cannot find device "nvmf_init_br2" 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:41.231 Cannot find device "nvmf_tgt_br" 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:41.231 Cannot find device "nvmf_tgt_br2" 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:19:41.231 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:41.490 Cannot find device "nvmf_br" 00:19:41.490 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:19:41.490 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:41.490 Cannot find device "nvmf_init_if" 00:19:41.490 11:41:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:19:41.490 11:41:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:41.490 Cannot find device "nvmf_init_if2" 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:41.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:41.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:41.490 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:41.491 11:41:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:41.491 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:41.491 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:19:41.491 00:19:41.491 --- 10.0.0.3 ping statistics --- 00:19:41.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.491 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:41.491 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:41.491 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:19:41.491 00:19:41.491 --- 10.0.0.4 ping statistics --- 00:19:41.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.491 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:41.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:41.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:41.491 00:19:41.491 --- 10.0.0.1 ping statistics --- 00:19:41.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.491 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:41.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:19:41.491 00:19:41.491 --- 10.0.0.2 ping statistics --- 00:19:41.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.491 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:41.491 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=93161 00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 93161 00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 93161 ']' 00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
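[editor's note] The nvmfpid=93161 entries above show nvmfappstart launching the target inside the nvmf_tgt_ns_spdk namespace (nvmf/common.sh@508) and then waiting on its RPC socket via waitforlisten. A minimal standalone sketch of that step, reconstructed from the commands visible in this trace; the polling loop body is an assumption, not the verbatim helper from autotest_common.sh:

  # Start the NVMe-oF target inside the test namespace (command taken from
  # the trace above); background it and keep the pid for later cleanup.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # waitforlisten (assumed body): poll until the RPC socket answers.
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods &> /dev/null && break
      sleep 0.1
  done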
00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.750 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:41.750 [2024-12-16 11:41:42.315584] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:19:41.750 [2024-12-16 11:41:42.315710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.750 [2024-12-16 11:41:42.470884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.008 [2024-12-16 11:41:42.538211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.008 [2024-12-16 11:41:42.538276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.008 [2024-12-16 11:41:42.538291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.008 [2024-12-16 11:41:42.538302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.008 [2024-12-16 11:41:42.538311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:42.008 [2024-12-16 11:41:42.538770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.008 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.008 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:42.008 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.008 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.008 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:42.008 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.008 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:42.008 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.008 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:42.008 [2024-12-16 11:41:42.729443] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.008 [2024-12-16 11:41:42.737582] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:42.008 null0 00:19:42.267 [2024-12-16 11:41:42.769514] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:42.267 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.267 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=93198 00:19:42.267 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:42.267 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 93198 /tmp/host.sock 00:19:42.267 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 93198 ']' 00:19:42.267 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:42.267 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.267 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:42.267 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:42.267 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.267 11:41:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:42.267 [2024-12-16 11:41:42.856450] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:19:42.267 [2024-12-16 11:41:42.856762] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93198 ] 00:19:42.267 [2024-12-16 11:41:43.007427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.525 [2024-12-16 11:41:43.068722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:43.461 11:41:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.461 11:41:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:44.398 [2024-12-16 11:41:44.982751] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:44.398 [2024-12-16 11:41:44.982782] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:44.398 [2024-12-16 11:41:44.982800] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:44.398 [2024-12-16 11:41:45.068860] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:44.398 [2024-12-16 11:41:45.123287] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:44.398 [2024-12-16 11:41:45.124111] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x23791f0:1 started. 00:19:44.398 [2024-12-16 11:41:45.126023] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:44.398 [2024-12-16 11:41:45.126082] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:44.398 [2024-12-16 11:41:45.126111] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:44.398 [2024-12-16 11:41:45.126128] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:44.398 [2024-12-16 11:41:45.126152] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:44.398 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.398 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:44.398 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:44.398 [2024-12-16 11:41:45.131356] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x23791f0 was disconnected and freed. delete nvme_qpair. 
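[editor's note] The @29/@33/@34 markers that follow come from two small helpers in discovery_remove_ifc.sh. Their bodies can be reconstructed from the expanded commands in this trace; the real helpers likely also enforce a timeout, which is omitted in this sketch:

  get_bdev_list() {
      # Ask the host app for its bdev names, flattened to one sorted line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # Poll once per second until the bdev list equals the expected value
      # (e.g. "nvme0n1", or "" while waiting for the bdev to disappear)
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }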
00:19:44.398 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:44.398 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:44.398 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.398 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:44.398 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.398 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:44.656 11:41:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:45.592 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:45.592 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:45.592 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:45.592 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.592 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:45.592 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:45.592 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:45.592 11:41:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.592 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:45.592 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:46.967 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:46.967 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.967 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.967 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:46.967 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:46.967 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:46.967 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:46.967 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.967 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:46.967 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:47.902 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:47.902 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:47.902 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:47.902 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.902 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:47.902 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:47.902 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:47.902 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.902 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:47.902 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:48.837 11:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:48.837 11:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:48.837 11:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.837 11:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:48.837 11:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:48.837 11:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:48.837 11:41:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:48.837 11:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.837 11:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:48.837 11:41:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:49.770 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:49.770 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:49.770 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:49.770 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.770 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:49.770 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:49.770 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:49.770 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.029 [2024-12-16 11:41:50.554187] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:50.029 [2024-12-16 11:41:50.554256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.029 [2024-12-16 11:41:50.554273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.029 [2024-12-16 11:41:50.554286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.029 [2024-12-16 11:41:50.554296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.029 [2024-12-16 11:41:50.554306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.029 [2024-12-16 11:41:50.554315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.029 [2024-12-16 11:41:50.554324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.029 [2024-12-16 11:41:50.554334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.029 [2024-12-16 11:41:50.554345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.029 [2024-12-16 11:41:50.554354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.029 [2024-12-16 11:41:50.554363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb290 is same with the state(6) to be set 00:19:50.029 11:41:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:50.029 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:50.029 [2024-12-16 11:41:50.564192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb290 (9): Bad file descriptor 00:19:50.029 [2024-12-16 11:41:50.574214] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:50.029 [2024-12-16 11:41:50.574379] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:50.029 [2024-12-16 11:41:50.574392] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:50.029 [2024-12-16 11:41:50.574398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:50.029 [2024-12-16 11:41:50.574446] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:51.007 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:51.007 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:51.007 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:51.007 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.007 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:51.007 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:51.007 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:51.007 [2024-12-16 11:41:51.623283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:51.007 [2024-12-16 11:41:51.623648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bb290 with addr=10.0.0.3, port=4420 00:19:51.007 [2024-12-16 11:41:51.624002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bb290 is same with the state(6) to be set 00:19:51.007 [2024-12-16 11:41:51.624122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb290 (9): Bad file descriptor 00:19:51.007 [2024-12-16 11:41:51.625339] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:19:51.007 [2024-12-16 11:41:51.625461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:51.007 [2024-12-16 11:41:51.625489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:51.007 [2024-12-16 11:41:51.625511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:51.007 [2024-12-16 11:41:51.625531] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:19:51.007 [2024-12-16 11:41:51.625546] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:51.007 [2024-12-16 11:41:51.625557] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:51.007 [2024-12-16 11:41:51.625581] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:51.007 [2024-12-16 11:41:51.625595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:51.007 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.007 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:51.007 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:51.941 [2024-12-16 11:41:52.625675] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:51.941 [2024-12-16 11:41:52.625716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:51.941 [2024-12-16 11:41:52.625748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:51.941 [2024-12-16 11:41:52.625760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:51.941 [2024-12-16 11:41:52.625771] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:19:51.941 [2024-12-16 11:41:52.625781] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:51.941 [2024-12-16 11:41:52.625788] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:51.941 [2024-12-16 11:41:52.625793] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
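[editor's note] The fast give-up above is governed by the flags passed when discovery was started (host/discovery_remove_ifc.sh@69), repeated here from the trace with the timing knobs annotated; the annotations are editorial, the command itself is verbatim:

  #   --reconnect-delay-sec 1       retry the connection roughly once per second
  #   --ctrlr-loss-timeout-sec 2    declare the controller lost after ~2 s
  #   --fast-io-fail-timeout-sec 1  fail queued I/O after ~1 s without a connection
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach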
00:19:51.941 [2024-12-16 11:41:52.625828] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:19:51.941 [2024-12-16 11:41:52.625878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.941 [2024-12-16 11:41:52.625894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.941 [2024-12-16 11:41:52.625908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.941 [2024-12-16 11:41:52.625917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.941 [2024-12-16 11:41:52.625927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.941 [2024-12-16 11:41:52.625936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.941 [2024-12-16 11:41:52.625945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.941 [2024-12-16 11:41:52.625954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.941 [2024-12-16 11:41:52.625964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.941 [2024-12-16 11:41:52.625973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.942 [2024-12-16 11:41:52.625983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:19:51.942 [2024-12-16 11:41:52.626486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e7820 (9): Bad file descriptor 00:19:51.942 [2024-12-16 11:41:52.627498] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:51.942 [2024-12-16 11:41:52.627520] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:19:51.942 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:51.942 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:51.942 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:51.942 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.942 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:51.942 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:51.942 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:51.942 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:52.199 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:53.134 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:53.134 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:53.134 11:41:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:53.134 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.134 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:53.134 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:53.134 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:53.134 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.134 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:53.134 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:54.069 [2024-12-16 11:41:54.634600] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:54.069 [2024-12-16 11:41:54.634627] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:54.069 [2024-12-16 11:41:54.634647] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:54.069 [2024-12-16 11:41:54.720707] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:19:54.069 [2024-12-16 11:41:54.775277] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:19:54.069 [2024-12-16 11:41:54.776063] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x23338f0:1 started. 00:19:54.069 [2024-12-16 11:41:54.777708] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:54.069 [2024-12-16 11:41:54.777870] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:54.069 [2024-12-16 11:41:54.777939] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:54.069 [2024-12-16 11:41:54.778064] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:19:54.069 [2024-12-16 11:41:54.778129] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:54.069 [2024-12-16 11:41:54.783224] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x23338f0 was disconnected and freed. delete nvme_qpair. 
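[editor's note] With nvme1 attached, the remove/restore cycle is complete. Condensed, the sequence that the @72 through @86 markers in this trace walk through looks like the sketch below; addresses and interface names are taken from the trace, and this is not the verbatim script:

  wait_for_bdev nvme0n1                                                   # @72
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if  # @75
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down             # @76
  wait_for_bdev ''                                                        # @79
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # @82
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up               # @83
  wait_for_bdev nvme1n1                                                   # @86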
00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 93198 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 93198 ']' 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 93198 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93198 00:19:54.327 killing process with pid 93198 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93198' 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 93198 00:19:54.327 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 93198 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:54.585 rmmod nvme_tcp 00:19:54.585 rmmod nvme_fabrics 00:19:54.585 rmmod nvme_keyring 00:19:54.585 11:41:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 93161 ']' 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 93161 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 93161 ']' 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 93161 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93161 00:19:54.585 killing process with pid 93161 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93161' 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 93161 00:19:54.585 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 93161 00:19:54.842 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:54.842 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:54.842 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:54.842 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:54.843 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:19:55.101 00:19:55.101 real 0m14.044s 00:19:55.101 user 0m25.029s 00:19:55.101 sys 0m1.765s 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.101 ************************************ 00:19:55.101 END TEST nvmf_discovery_remove_ifc 00:19:55.101 ************************************ 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.101 ************************************ 00:19:55.101 START TEST nvmf_identify_kernel_target 00:19:55.101 ************************************ 00:19:55.101 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:55.101 * Looking for test storage... 
00:19:55.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:55.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.361 --rc genhtml_branch_coverage=1 00:19:55.361 --rc genhtml_function_coverage=1 00:19:55.361 --rc genhtml_legend=1 00:19:55.361 --rc geninfo_all_blocks=1 00:19:55.361 --rc geninfo_unexecuted_blocks=1 00:19:55.361 00:19:55.361 ' 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:55.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.361 --rc genhtml_branch_coverage=1 00:19:55.361 --rc genhtml_function_coverage=1 00:19:55.361 --rc genhtml_legend=1 00:19:55.361 --rc geninfo_all_blocks=1 00:19:55.361 --rc geninfo_unexecuted_blocks=1 00:19:55.361 00:19:55.361 ' 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:55.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.361 --rc genhtml_branch_coverage=1 00:19:55.361 --rc genhtml_function_coverage=1 00:19:55.361 --rc genhtml_legend=1 00:19:55.361 --rc geninfo_all_blocks=1 00:19:55.361 --rc geninfo_unexecuted_blocks=1 00:19:55.361 00:19:55.361 ' 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:55.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.361 --rc genhtml_branch_coverage=1 00:19:55.361 --rc genhtml_function_coverage=1 00:19:55.361 --rc genhtml_legend=1 00:19:55.361 --rc geninfo_all_blocks=1 00:19:55.361 --rc geninfo_unexecuted_blocks=1 00:19:55.361 00:19:55.361 ' 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.361 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:55.362 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:55.362 11:41:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:55.362 11:41:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:55.362 11:41:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:55.362 Cannot find device "nvmf_init_br" 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:55.362 Cannot find device "nvmf_init_br2" 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:55.362 Cannot find device "nvmf_tgt_br" 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:55.362 Cannot find device "nvmf_tgt_br2" 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:55.362 Cannot find device "nvmf_init_br" 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:55.362 Cannot find device "nvmf_init_br2" 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:55.362 Cannot find device "nvmf_tgt_br" 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:55.362 Cannot find device "nvmf_tgt_br2" 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:55.362 Cannot find device "nvmf_br" 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:19:55.362 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:55.621 Cannot find device "nvmf_init_if" 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:55.621 Cannot find device "nvmf_init_if2" 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:55.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.621 11:41:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:55.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:55.621 11:41:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:55.621 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:55.622 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:55.622 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:55.622 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:55.622 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:55.622 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:19:55.622 00:19:55.622 --- 10.0.0.3 ping statistics --- 00:19:55.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.622 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:55.622 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:55.622 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:55.622 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:19:55.622 00:19:55.622 --- 10.0.0.4 ping statistics --- 00:19:55.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.622 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:55.622 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:55.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:55.880 00:19:55.880 --- 10.0.0.1 ping statistics --- 00:19:55.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.880 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:55.880 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:55.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:55.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:55.880 00:19:55.880 --- 10.0.0.2 ping statistics --- 00:19:55.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.881 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:55.881 11:41:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:56.139 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:56.139 Waiting for block devices as requested 00:19:56.139 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:56.398 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:56.398 No valid GPT data, bailing 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:56.398 11:41:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:56.398 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:56.398 No valid GPT data, bailing 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:56.658 No valid GPT data, bailing 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:56.658 No valid GPT data, bailing 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -a 10.0.0.1 -t tcp -s 4420 00:19:56.658 00:19:56.658 Discovery Log Number of Records 2, Generation counter 2 00:19:56.658 =====Discovery Log Entry 0====== 00:19:56.658 trtype: tcp 00:19:56.658 adrfam: ipv4 00:19:56.658 subtype: current discovery subsystem 00:19:56.658 treq: not specified, sq flow control disable supported 00:19:56.658 portid: 1 00:19:56.658 trsvcid: 4420 00:19:56.658 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:56.658 traddr: 10.0.0.1 00:19:56.658 eflags: none 00:19:56.658 sectype: none 00:19:56.658 =====Discovery Log Entry 1====== 00:19:56.658 trtype: tcp 00:19:56.658 adrfam: ipv4 00:19:56.658 subtype: nvme subsystem 00:19:56.658 treq: not 
specified, sq flow control disable supported 00:19:56.658 portid: 1 00:19:56.658 trsvcid: 4420 00:19:56.658 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:56.658 traddr: 10.0.0.1 00:19:56.658 eflags: none 00:19:56.658 sectype: none 00:19:56.658 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:56.658 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:56.917 ===================================================== 00:19:56.917 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:56.917 ===================================================== 00:19:56.917 Controller Capabilities/Features 00:19:56.917 ================================ 00:19:56.917 Vendor ID: 0000 00:19:56.917 Subsystem Vendor ID: 0000 00:19:56.917 Serial Number: b20e19293dafbbfe0551 00:19:56.917 Model Number: Linux 00:19:56.917 Firmware Version: 6.8.9-20 00:19:56.917 Recommended Arb Burst: 0 00:19:56.917 IEEE OUI Identifier: 00 00 00 00:19:56.917 Multi-path I/O 00:19:56.917 May have multiple subsystem ports: No 00:19:56.917 May have multiple controllers: No 00:19:56.917 Associated with SR-IOV VF: No 00:19:56.917 Max Data Transfer Size: Unlimited 00:19:56.917 Max Number of Namespaces: 0 00:19:56.917 Max Number of I/O Queues: 1024 00:19:56.917 NVMe Specification Version (VS): 1.3 00:19:56.917 NVMe Specification Version (Identify): 1.3 00:19:56.917 Maximum Queue Entries: 1024 00:19:56.917 Contiguous Queues Required: No 00:19:56.917 Arbitration Mechanisms Supported 00:19:56.917 Weighted Round Robin: Not Supported 00:19:56.917 Vendor Specific: Not Supported 00:19:56.917 Reset Timeout: 7500 ms 00:19:56.917 Doorbell Stride: 4 bytes 00:19:56.917 NVM Subsystem Reset: Not Supported 00:19:56.917 Command Sets Supported 00:19:56.917 NVM Command Set: Supported 00:19:56.917 Boot Partition: Not Supported 00:19:56.917 Memory Page Size Minimum: 4096 bytes 00:19:56.917 Memory Page Size Maximum: 4096 bytes 00:19:56.917 Persistent Memory Region: Not Supported 00:19:56.917 Optional Asynchronous Events Supported 00:19:56.917 Namespace Attribute Notices: Not Supported 00:19:56.917 Firmware Activation Notices: Not Supported 00:19:56.917 ANA Change Notices: Not Supported 00:19:56.917 PLE Aggregate Log Change Notices: Not Supported 00:19:56.917 LBA Status Info Alert Notices: Not Supported 00:19:56.917 EGE Aggregate Log Change Notices: Not Supported 00:19:56.917 Normal NVM Subsystem Shutdown event: Not Supported 00:19:56.917 Zone Descriptor Change Notices: Not Supported 00:19:56.917 Discovery Log Change Notices: Supported 00:19:56.917 Controller Attributes 00:19:56.917 128-bit Host Identifier: Not Supported 00:19:56.917 Non-Operational Permissive Mode: Not Supported 00:19:56.917 NVM Sets: Not Supported 00:19:56.917 Read Recovery Levels: Not Supported 00:19:56.917 Endurance Groups: Not Supported 00:19:56.917 Predictable Latency Mode: Not Supported 00:19:56.917 Traffic Based Keep ALive: Not Supported 00:19:56.917 Namespace Granularity: Not Supported 00:19:56.917 SQ Associations: Not Supported 00:19:56.917 UUID List: Not Supported 00:19:56.917 Multi-Domain Subsystem: Not Supported 00:19:56.917 Fixed Capacity Management: Not Supported 00:19:56.917 Variable Capacity Management: Not Supported 00:19:56.917 Delete Endurance Group: Not Supported 00:19:56.917 Delete NVM Set: Not Supported 00:19:56.917 Extended LBA Formats Supported: Not Supported 00:19:56.917 Flexible Data 
Placement Supported: Not Supported 00:19:56.917 00:19:56.917 Controller Memory Buffer Support 00:19:56.917 ================================ 00:19:56.917 Supported: No 00:19:56.917 00:19:56.917 Persistent Memory Region Support 00:19:56.917 ================================ 00:19:56.917 Supported: No 00:19:56.917 00:19:56.917 Admin Command Set Attributes 00:19:56.917 ============================ 00:19:56.917 Security Send/Receive: Not Supported 00:19:56.917 Format NVM: Not Supported 00:19:56.917 Firmware Activate/Download: Not Supported 00:19:56.917 Namespace Management: Not Supported 00:19:56.917 Device Self-Test: Not Supported 00:19:56.917 Directives: Not Supported 00:19:56.917 NVMe-MI: Not Supported 00:19:56.917 Virtualization Management: Not Supported 00:19:56.917 Doorbell Buffer Config: Not Supported 00:19:56.917 Get LBA Status Capability: Not Supported 00:19:56.917 Command & Feature Lockdown Capability: Not Supported 00:19:56.917 Abort Command Limit: 1 00:19:56.917 Async Event Request Limit: 1 00:19:56.917 Number of Firmware Slots: N/A 00:19:56.917 Firmware Slot 1 Read-Only: N/A 00:19:56.917 Firmware Activation Without Reset: N/A 00:19:56.917 Multiple Update Detection Support: N/A 00:19:56.917 Firmware Update Granularity: No Information Provided 00:19:56.917 Per-Namespace SMART Log: No 00:19:56.917 Asymmetric Namespace Access Log Page: Not Supported 00:19:56.917 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:56.917 Command Effects Log Page: Not Supported 00:19:56.917 Get Log Page Extended Data: Supported 00:19:56.917 Telemetry Log Pages: Not Supported 00:19:56.917 Persistent Event Log Pages: Not Supported 00:19:56.917 Supported Log Pages Log Page: May Support 00:19:56.917 Commands Supported & Effects Log Page: Not Supported 00:19:56.917 Feature Identifiers & Effects Log Page:May Support 00:19:56.917 NVMe-MI Commands & Effects Log Page: May Support 00:19:56.917 Data Area 4 for Telemetry Log: Not Supported 00:19:56.917 Error Log Page Entries Supported: 1 00:19:56.917 Keep Alive: Not Supported 00:19:56.917 00:19:56.917 NVM Command Set Attributes 00:19:56.917 ========================== 00:19:56.917 Submission Queue Entry Size 00:19:56.917 Max: 1 00:19:56.917 Min: 1 00:19:56.917 Completion Queue Entry Size 00:19:56.917 Max: 1 00:19:56.918 Min: 1 00:19:56.918 Number of Namespaces: 0 00:19:56.918 Compare Command: Not Supported 00:19:56.918 Write Uncorrectable Command: Not Supported 00:19:56.918 Dataset Management Command: Not Supported 00:19:56.918 Write Zeroes Command: Not Supported 00:19:56.918 Set Features Save Field: Not Supported 00:19:56.918 Reservations: Not Supported 00:19:56.918 Timestamp: Not Supported 00:19:56.918 Copy: Not Supported 00:19:56.918 Volatile Write Cache: Not Present 00:19:56.918 Atomic Write Unit (Normal): 1 00:19:56.918 Atomic Write Unit (PFail): 1 00:19:56.918 Atomic Compare & Write Unit: 1 00:19:56.918 Fused Compare & Write: Not Supported 00:19:56.918 Scatter-Gather List 00:19:56.918 SGL Command Set: Supported 00:19:56.918 SGL Keyed: Not Supported 00:19:56.918 SGL Bit Bucket Descriptor: Not Supported 00:19:56.918 SGL Metadata Pointer: Not Supported 00:19:56.918 Oversized SGL: Not Supported 00:19:56.918 SGL Metadata Address: Not Supported 00:19:56.918 SGL Offset: Supported 00:19:56.918 Transport SGL Data Block: Not Supported 00:19:56.918 Replay Protected Memory Block: Not Supported 00:19:56.918 00:19:56.918 Firmware Slot Information 00:19:56.918 ========================= 00:19:56.918 Active slot: 0 00:19:56.918 00:19:56.918 00:19:56.918 Error Log 
00:19:56.918 ========= 00:19:56.918 00:19:56.918 Active Namespaces 00:19:56.918 ================= 00:19:56.918 Discovery Log Page 00:19:56.918 ================== 00:19:56.918 Generation Counter: 2 00:19:56.918 Number of Records: 2 00:19:56.918 Record Format: 0 00:19:56.918 00:19:56.918 Discovery Log Entry 0 00:19:56.918 ---------------------- 00:19:56.918 Transport Type: 3 (TCP) 00:19:56.918 Address Family: 1 (IPv4) 00:19:56.918 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:56.918 Entry Flags: 00:19:56.918 Duplicate Returned Information: 0 00:19:56.918 Explicit Persistent Connection Support for Discovery: 0 00:19:56.918 Transport Requirements: 00:19:56.918 Secure Channel: Not Specified 00:19:56.918 Port ID: 1 (0x0001) 00:19:56.918 Controller ID: 65535 (0xffff) 00:19:56.918 Admin Max SQ Size: 32 00:19:56.918 Transport Service Identifier: 4420 00:19:56.918 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:56.918 Transport Address: 10.0.0.1 00:19:56.918 Discovery Log Entry 1 00:19:56.918 ---------------------- 00:19:56.918 Transport Type: 3 (TCP) 00:19:56.918 Address Family: 1 (IPv4) 00:19:56.918 Subsystem Type: 2 (NVM Subsystem) 00:19:56.918 Entry Flags: 00:19:56.918 Duplicate Returned Information: 0 00:19:56.918 Explicit Persistent Connection Support for Discovery: 0 00:19:56.918 Transport Requirements: 00:19:56.918 Secure Channel: Not Specified 00:19:56.918 Port ID: 1 (0x0001) 00:19:56.918 Controller ID: 65535 (0xffff) 00:19:56.918 Admin Max SQ Size: 32 00:19:56.918 Transport Service Identifier: 4420 00:19:56.918 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:56.918 Transport Address: 10.0.0.1 00:19:56.918 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:57.177 get_feature(0x01) failed 00:19:57.177 get_feature(0x02) failed 00:19:57.177 get_feature(0x04) failed 00:19:57.177 ===================================================== 00:19:57.177 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:57.177 ===================================================== 00:19:57.177 Controller Capabilities/Features 00:19:57.177 ================================ 00:19:57.177 Vendor ID: 0000 00:19:57.177 Subsystem Vendor ID: 0000 00:19:57.177 Serial Number: 41652fe8c8ccd8a79a1d 00:19:57.177 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:57.177 Firmware Version: 6.8.9-20 00:19:57.177 Recommended Arb Burst: 6 00:19:57.177 IEEE OUI Identifier: 00 00 00 00:19:57.177 Multi-path I/O 00:19:57.177 May have multiple subsystem ports: Yes 00:19:57.177 May have multiple controllers: Yes 00:19:57.177 Associated with SR-IOV VF: No 00:19:57.177 Max Data Transfer Size: Unlimited 00:19:57.177 Max Number of Namespaces: 1024 00:19:57.177 Max Number of I/O Queues: 128 00:19:57.177 NVMe Specification Version (VS): 1.3 00:19:57.177 NVMe Specification Version (Identify): 1.3 00:19:57.177 Maximum Queue Entries: 1024 00:19:57.177 Contiguous Queues Required: No 00:19:57.177 Arbitration Mechanisms Supported 00:19:57.177 Weighted Round Robin: Not Supported 00:19:57.177 Vendor Specific: Not Supported 00:19:57.177 Reset Timeout: 7500 ms 00:19:57.177 Doorbell Stride: 4 bytes 00:19:57.177 NVM Subsystem Reset: Not Supported 00:19:57.177 Command Sets Supported 00:19:57.177 NVM Command Set: Supported 00:19:57.177 Boot Partition: Not Supported 00:19:57.177 Memory 
Page Size Minimum: 4096 bytes
00:19:57.177 Memory Page Size Maximum: 4096 bytes
00:19:57.177 Persistent Memory Region: Not Supported
00:19:57.177 Optional Asynchronous Events Supported
00:19:57.177 Namespace Attribute Notices: Supported
00:19:57.177 Firmware Activation Notices: Not Supported
00:19:57.177 ANA Change Notices: Supported
00:19:57.177 PLE Aggregate Log Change Notices: Not Supported
00:19:57.177 LBA Status Info Alert Notices: Not Supported
00:19:57.177 EGE Aggregate Log Change Notices: Not Supported
00:19:57.177 Normal NVM Subsystem Shutdown event: Not Supported
00:19:57.177 Zone Descriptor Change Notices: Not Supported
00:19:57.177 Discovery Log Change Notices: Not Supported
00:19:57.177 Controller Attributes
00:19:57.177 128-bit Host Identifier: Supported
00:19:57.177 Non-Operational Permissive Mode: Not Supported
00:19:57.177 NVM Sets: Not Supported
00:19:57.177 Read Recovery Levels: Not Supported
00:19:57.177 Endurance Groups: Not Supported
00:19:57.177 Predictable Latency Mode: Not Supported
00:19:57.177 Traffic Based Keep Alive: Supported
00:19:57.177 Namespace Granularity: Not Supported
00:19:57.177 SQ Associations: Not Supported
00:19:57.177 UUID List: Not Supported
00:19:57.177 Multi-Domain Subsystem: Not Supported
00:19:57.177 Fixed Capacity Management: Not Supported
00:19:57.177 Variable Capacity Management: Not Supported
00:19:57.177 Delete Endurance Group: Not Supported
00:19:57.177 Delete NVM Set: Not Supported
00:19:57.177 Extended LBA Formats Supported: Not Supported
00:19:57.177 Flexible Data Placement Supported: Not Supported
00:19:57.177 
00:19:57.178 Controller Memory Buffer Support
00:19:57.178 ================================
00:19:57.178 Supported: No
00:19:57.178 
00:19:57.178 Persistent Memory Region Support
00:19:57.178 ================================
00:19:57.178 Supported: No
00:19:57.178 
00:19:57.178 Admin Command Set Attributes
00:19:57.178 ============================
00:19:57.178 Security Send/Receive: Not Supported
00:19:57.178 Format NVM: Not Supported
00:19:57.178 Firmware Activate/Download: Not Supported
00:19:57.178 Namespace Management: Not Supported
00:19:57.178 Device Self-Test: Not Supported
00:19:57.178 Directives: Not Supported
00:19:57.178 NVMe-MI: Not Supported
00:19:57.178 Virtualization Management: Not Supported
00:19:57.178 Doorbell Buffer Config: Not Supported
00:19:57.178 Get LBA Status Capability: Not Supported
00:19:57.178 Command & Feature Lockdown Capability: Not Supported
00:19:57.178 Abort Command Limit: 4
00:19:57.178 Async Event Request Limit: 4
00:19:57.178 Number of Firmware Slots: N/A
00:19:57.178 Firmware Slot 1 Read-Only: N/A
00:19:57.178 Firmware Activation Without Reset: N/A
00:19:57.178 Multiple Update Detection Support: N/A
00:19:57.178 Firmware Update Granularity: No Information Provided
00:19:57.178 Per-Namespace SMART Log: Yes
00:19:57.178 Asymmetric Namespace Access Log Page: Supported
00:19:57.178 ANA Transition Time : 10 sec
00:19:57.178 
00:19:57.178 Asymmetric Namespace Access Capabilities
00:19:57.178 ANA Optimized State : Supported
00:19:57.178 ANA Non-Optimized State : Supported
00:19:57.178 ANA Inaccessible State : Supported
00:19:57.178 ANA Persistent Loss State : Supported
00:19:57.178 ANA Change State : Supported
00:19:57.178 ANAGRPID is not changed : No
00:19:57.178 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:19:57.178 
00:19:57.178 ANA Group Identifier Maximum : 128
00:19:57.178 Number of ANA Group Identifiers : 128
00:19:57.178 Max Number of Allowed Namespaces : 1024
00:19:57.178 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:19:57.178 Command Effects Log Page: Supported
00:19:57.178 Get Log Page Extended Data: Supported
00:19:57.178 Telemetry Log Pages: Not Supported
00:19:57.178 Persistent Event Log Pages: Not Supported
00:19:57.178 Supported Log Pages Log Page: May Support
00:19:57.178 Commands Supported & Effects Log Page: Not Supported
00:19:57.178 Feature Identifiers & Effects Log Page: May Support
00:19:57.178 NVMe-MI Commands & Effects Log Page: May Support
00:19:57.178 Data Area 4 for Telemetry Log: Not Supported
00:19:57.178 Error Log Page Entries Supported: 128
00:19:57.178 Keep Alive: Supported
00:19:57.178 Keep Alive Granularity: 1000 ms
00:19:57.178 
00:19:57.178 NVM Command Set Attributes
00:19:57.178 ==========================
00:19:57.178 Submission Queue Entry Size
00:19:57.178 Max: 64
00:19:57.178 Min: 64
00:19:57.178 Completion Queue Entry Size
00:19:57.178 Max: 16
00:19:57.178 Min: 16
00:19:57.178 Number of Namespaces: 1024
00:19:57.178 Compare Command: Not Supported
00:19:57.178 Write Uncorrectable Command: Not Supported
00:19:57.178 Dataset Management Command: Supported
00:19:57.178 Write Zeroes Command: Supported
00:19:57.178 Set Features Save Field: Not Supported
00:19:57.178 Reservations: Not Supported
00:19:57.178 Timestamp: Not Supported
00:19:57.178 Copy: Not Supported
00:19:57.178 Volatile Write Cache: Present
00:19:57.178 Atomic Write Unit (Normal): 1
00:19:57.178 Atomic Write Unit (PFail): 1
00:19:57.178 Atomic Compare & Write Unit: 1
00:19:57.178 Fused Compare & Write: Not Supported
00:19:57.178 Scatter-Gather List
00:19:57.178 SGL Command Set: Supported
00:19:57.178 SGL Keyed: Not Supported
00:19:57.178 SGL Bit Bucket Descriptor: Not Supported
00:19:57.178 SGL Metadata Pointer: Not Supported
00:19:57.178 Oversized SGL: Not Supported
00:19:57.178 SGL Metadata Address: Not Supported
00:19:57.178 SGL Offset: Supported
00:19:57.178 Transport SGL Data Block: Not Supported
00:19:57.178 Replay Protected Memory Block: Not Supported
00:19:57.178 
00:19:57.178 Firmware Slot Information
00:19:57.178 =========================
00:19:57.178 Active slot: 0
00:19:57.178 
00:19:57.178 Asymmetric Namespace Access
00:19:57.178 ===========================
00:19:57.178 Change Count : 0
00:19:57.178 Number of ANA Group Descriptors : 1
00:19:57.178 ANA Group Descriptor : 0
00:19:57.178 ANA Group ID : 1
00:19:57.178 Number of NSID Values : 1
00:19:57.178 Change Count : 0
00:19:57.178 ANA State : 1
00:19:57.178 Namespace Identifier : 1
00:19:57.178 
00:19:57.178 Commands Supported and Effects
00:19:57.178 ==============================
00:19:57.178 Admin Commands
00:19:57.178 --------------
00:19:57.178 Get Log Page (02h): Supported
00:19:57.178 Identify (06h): Supported
00:19:57.178 Abort (08h): Supported
00:19:57.178 Set Features (09h): Supported
00:19:57.178 Get Features (0Ah): Supported
00:19:57.178 Asynchronous Event Request (0Ch): Supported
00:19:57.178 Keep Alive (18h): Supported
00:19:57.178 I/O Commands
00:19:57.178 ------------
00:19:57.178 Flush (00h): Supported
00:19:57.178 Write (01h): Supported LBA-Change
00:19:57.178 Read (02h): Supported
00:19:57.178 Write Zeroes (08h): Supported LBA-Change
00:19:57.178 Dataset Management (09h): Supported
00:19:57.178 
00:19:57.178 Error Log
00:19:57.178 =========
00:19:57.178 Entry: 0
00:19:57.178 Error Count: 0x3
00:19:57.178 Submission Queue Id: 0x0
00:19:57.178 Command Id: 0x5
00:19:57.178 Phase Bit: 0
00:19:57.178 Status Code: 0x2
00:19:57.178 Status Code Type: 0x0
00:19:57.178 Do Not Retry: 1
00:19:57.178 Error
Location: 0x28 00:19:57.178 LBA: 0x0 00:19:57.178 Namespace: 0x0 00:19:57.178 Vendor Log Page: 0x0 00:19:57.178 ----------- 00:19:57.178 Entry: 1 00:19:57.178 Error Count: 0x2 00:19:57.178 Submission Queue Id: 0x0 00:19:57.178 Command Id: 0x5 00:19:57.178 Phase Bit: 0 00:19:57.178 Status Code: 0x2 00:19:57.178 Status Code Type: 0x0 00:19:57.178 Do Not Retry: 1 00:19:57.178 Error Location: 0x28 00:19:57.178 LBA: 0x0 00:19:57.178 Namespace: 0x0 00:19:57.178 Vendor Log Page: 0x0 00:19:57.178 ----------- 00:19:57.178 Entry: 2 00:19:57.178 Error Count: 0x1 00:19:57.178 Submission Queue Id: 0x0 00:19:57.178 Command Id: 0x4 00:19:57.178 Phase Bit: 0 00:19:57.178 Status Code: 0x2 00:19:57.178 Status Code Type: 0x0 00:19:57.178 Do Not Retry: 1 00:19:57.178 Error Location: 0x28 00:19:57.178 LBA: 0x0 00:19:57.178 Namespace: 0x0 00:19:57.178 Vendor Log Page: 0x0 00:19:57.178 00:19:57.178 Number of Queues 00:19:57.178 ================ 00:19:57.178 Number of I/O Submission Queues: 128 00:19:57.178 Number of I/O Completion Queues: 128 00:19:57.178 00:19:57.178 ZNS Specific Controller Data 00:19:57.178 ============================ 00:19:57.178 Zone Append Size Limit: 0 00:19:57.178 00:19:57.178 00:19:57.178 Active Namespaces 00:19:57.178 ================= 00:19:57.178 get_feature(0x05) failed 00:19:57.178 Namespace ID:1 00:19:57.178 Command Set Identifier: NVM (00h) 00:19:57.178 Deallocate: Supported 00:19:57.178 Deallocated/Unwritten Error: Not Supported 00:19:57.178 Deallocated Read Value: Unknown 00:19:57.178 Deallocate in Write Zeroes: Not Supported 00:19:57.178 Deallocated Guard Field: 0xFFFF 00:19:57.178 Flush: Supported 00:19:57.178 Reservation: Not Supported 00:19:57.178 Namespace Sharing Capabilities: Multiple Controllers 00:19:57.178 Size (in LBAs): 1310720 (5GiB) 00:19:57.178 Capacity (in LBAs): 1310720 (5GiB) 00:19:57.178 Utilization (in LBAs): 1310720 (5GiB) 00:19:57.178 UUID: b2c7a909-3fa4-4f48-a88e-f76c6ae18b1b 00:19:57.178 Thin Provisioning: Not Supported 00:19:57.178 Per-NS Atomic Units: Yes 00:19:57.178 Atomic Boundary Size (Normal): 0 00:19:57.178 Atomic Boundary Size (PFail): 0 00:19:57.178 Atomic Boundary Offset: 0 00:19:57.178 NGUID/EUI64 Never Reused: No 00:19:57.178 ANA group ID: 1 00:19:57.178 Namespace Write Protected: No 00:19:57.178 Number of LBA Formats: 1 00:19:57.178 Current LBA Format: LBA Format #00 00:19:57.178 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:57.178 00:19:57.178 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:57.178 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:57.178 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:57.178 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:57.178 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:57.178 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:57.179 rmmod nvme_tcp 00:19:57.179 rmmod nvme_fabrics 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:57.179 11:41:57 
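For reference: the dump above (controller attributes through the error log and active namespace) is the identify utility's view of the kernel nvmet target under test. A rough equivalent could be pulled with stock nvme-cli as sketched below; the /dev/nvme0 device node and the 10.0.0.1 address are assumptions for illustration, while the subsystem NQN and the default 4420 port come from the log.

    nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    nvme id-ctrl /dev/nvme0 -H       # controller attributes, human readable
    nvme error-log /dev/nvme0 -e 3   # the three error-log entries shown above
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn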
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.179 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:57.437 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:57.437 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:57.437 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:57.437 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:57.437 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:57.437 11:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:57.437 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.437 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:57.437 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:57.438 11:41:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:58.372 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:58.372 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:58.372 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:58.372 ************************************ 00:19:58.372 END TEST nvmf_identify_kernel_target 00:19:58.372 ************************************ 00:19:58.372 00:19:58.372 real 0m3.274s 00:19:58.372 user 0m1.140s 00:19:58.372 sys 0m1.468s 00:19:58.372 11:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.372 11:41:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.372 11:41:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:58.372 11:41:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:58.372 11:41:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.372 11:41:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.372 ************************************ 00:19:58.373 START TEST nvmf_auth_host 00:19:58.373 ************************************ 00:19:58.373 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:58.631 * Looking for test storage... 
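The clean_kernel_target teardown traced just above reduces to a strict child-before-parent sequence of configfs operations (paths verbatim from the trace; the echo 0 disables the namespace, and its .../enable redirection target is inferred since xtrace hides redirections):

    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet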
00:19:58.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:58.631 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:58.631 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:19:58.631 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:58.631 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:58.631 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:58.631 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:58.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.632 --rc genhtml_branch_coverage=1 00:19:58.632 --rc genhtml_function_coverage=1 00:19:58.632 --rc genhtml_legend=1 00:19:58.632 --rc geninfo_all_blocks=1 00:19:58.632 --rc geninfo_unexecuted_blocks=1 00:19:58.632 00:19:58.632 ' 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:58.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.632 --rc genhtml_branch_coverage=1 00:19:58.632 --rc genhtml_function_coverage=1 00:19:58.632 --rc genhtml_legend=1 00:19:58.632 --rc geninfo_all_blocks=1 00:19:58.632 --rc geninfo_unexecuted_blocks=1 00:19:58.632 00:19:58.632 ' 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:58.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.632 --rc genhtml_branch_coverage=1 00:19:58.632 --rc genhtml_function_coverage=1 00:19:58.632 --rc genhtml_legend=1 00:19:58.632 --rc geninfo_all_blocks=1 00:19:58.632 --rc geninfo_unexecuted_blocks=1 00:19:58.632 00:19:58.632 ' 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:58.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.632 --rc genhtml_branch_coverage=1 00:19:58.632 --rc genhtml_function_coverage=1 00:19:58.632 --rc genhtml_legend=1 00:19:58.632 --rc geninfo_all_blocks=1 00:19:58.632 --rc geninfo_unexecuted_blocks=1 00:19:58.632 00:19:58.632 ' 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
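The lt/cmp_versions trace above is a field-by-field dotted-version comparison: it decides that lcov 1.15 predates 2.x, which selects the branch/function coverage flags exported into LCOV_OPTS. A minimal stand-in for the same check (not the repo's actual helper) would be:

    lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt 1.15 2 && echo "lcov 1.15 < 2"    # prints: lcov 1.15 < 2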
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:58.632 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:58.632 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:58.633 Cannot find device "nvmf_init_br" 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:58.633 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:58.891 Cannot find device "nvmf_init_br2" 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:58.891 Cannot find device "nvmf_tgt_br" 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:58.891 Cannot find device "nvmf_tgt_br2" 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:58.891 Cannot find device "nvmf_init_br" 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:58.891 Cannot find device "nvmf_init_br2" 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:58.891 Cannot find device "nvmf_tgt_br" 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:58.891 Cannot find device "nvmf_tgt_br2" 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:58.891 Cannot find device "nvmf_br" 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:58.891 Cannot find device "nvmf_init_if" 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:58.891 Cannot find device "nvmf_init_if2" 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:58.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.891 11:41:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:58.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:58.891 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
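Condensed, the nvmf_veth_init sequence traced above builds the test topology: veth pairs for two initiator and two target interfaces, with the target ends moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4), the initiator ends left in the root namespace (10.0.0.1/10.0.0.2), and every peer end enslaved to the nvmf_br bridge. Reduced to one interface per side, the sketch is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br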
00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:59.149 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:59.149 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:19:59.149 00:19:59.149 --- 10.0.0.3 ping statistics --- 00:19:59.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.149 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:59.149 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:59.149 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:19:59.149 00:19:59.149 --- 10.0.0.4 ping statistics --- 00:19:59.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.149 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:59.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:59.149 00:19:59.149 --- 10.0.0.1 ping statistics --- 00:19:59.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.149 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:59.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:59.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:19:59.149 00:19:59.149 --- 10.0.0.2 ping statistics --- 00:19:59.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.149 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=94211 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 94211 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 94211 ']' 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
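With networking verified by the pings above, the target application for this test is launched inside the namespace. The command below is assembled verbatim from the @508 trace entry; waitforlisten is the repo's helper that polls the /var/tmp/spdk.sock RPC socket before any configuration RPCs are issued:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!               # 94211 in this run
    waitforlisten "$nvmfpid"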
00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.149 11:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a82a0fe67dbfef760f482a1909e6e7e6 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.REW 00:19:59.715 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a82a0fe67dbfef760f482a1909e6e7e6 0 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a82a0fe67dbfef760f482a1909e6e7e6 0 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a82a0fe67dbfef760f482a1909e6e7e6 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.REW 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.REW 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.REW 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.716 11:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b6b901fecfab90985639073d0caebd8d94af1c64c924463d62a4933f74920744 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oZZ 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b6b901fecfab90985639073d0caebd8d94af1c64c924463d62a4933f74920744 3 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b6b901fecfab90985639073d0caebd8d94af1c64c924463d62a4933f74920744 3 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b6b901fecfab90985639073d0caebd8d94af1c64c924463d62a4933f74920744 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oZZ 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oZZ 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.oZZ 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2eaae426451d161be239bc3476e374cf86f5d0e36c529889 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.bUL 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2eaae426451d161be239bc3476e374cf86f5d0e36c529889 0 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2eaae426451d161be239bc3476e374cf86f5d0e36c529889 0 
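Each gen_dhchap_key call traced here draws random bytes with xxd and hands the hex string to format_key, which wraps it as an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<hash id>:<base64(secret || crc32_le(secret))>:. A standalone sketch of that formatting, assuming (as SPDK's helper does) that the ASCII hex string itself is the secret material; hash id 0 = none, 1 = sha256, 2 = sha384, 3 = sha512:

    key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex chars, i.e. the "null 32" case above
    python3 -c '
    import base64, sys, zlib
    k = sys.argv[1].encode()                     # secret bytes
    crc = zlib.crc32(k).to_bytes(4, "little")    # CRC-32 appended per the secret representation
    print("DHHC-1:{:02x}:{}:".format(0, base64.b64encode(k + crc).decode()))
    ' "$key"

The test stores each such key in a chmod-0600 temp file (/tmp/spdk.key-*) so it can later be registered with the target's keyring.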
00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2eaae426451d161be239bc3476e374cf86f5d0e36c529889 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:59.716 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.bUL 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.bUL 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.bUL 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b027eba812cfe874d05307e3bde095a024e50e526efb4752 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uiG 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b027eba812cfe874d05307e3bde095a024e50e526efb4752 2 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b027eba812cfe874d05307e3bde095a024e50e526efb4752 2 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b027eba812cfe874d05307e3bde095a024e50e526efb4752 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uiG 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uiG 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.uiG 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.974 11:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9119d3fcfee64965829b682fffcccb39 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9lW 00:19:59.974 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9119d3fcfee64965829b682fffcccb39 1 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9119d3fcfee64965829b682fffcccb39 1 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9119d3fcfee64965829b682fffcccb39 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9lW 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9lW 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.9lW 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4b0ac49c9b1efdb2b51308293780dbaf 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.yds 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4b0ac49c9b1efdb2b51308293780dbaf 1 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4b0ac49c9b1efdb2b51308293780dbaf 1 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=4b0ac49c9b1efdb2b51308293780dbaf 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.yds 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.yds 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.yds 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ad2b45a65e56af6d9e1e30642ad28755c39697af230880f1 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.V1q 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ad2b45a65e56af6d9e1e30642ad28755c39697af230880f1 2 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ad2b45a65e56af6d9e1e30642ad28755c39697af230880f1 2 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ad2b45a65e56af6d9e1e30642ad28755c39697af230880f1 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:59.975 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:00.233 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.V1q 00:20:00.233 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.V1q 00:20:00.233 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.V1q 00:20:00.233 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:00.233 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:00.233 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.233 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:00.233 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:00.233 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:00.233 11:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:00.233 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=16c571590c4fbe05d381beaf8a44eb21 00:20:00.233 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:00.233 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1H6 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 16c571590c4fbe05d381beaf8a44eb21 0 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 16c571590c4fbe05d381beaf8a44eb21 0 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=16c571590c4fbe05d381beaf8a44eb21 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1H6 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1H6 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.1H6 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d8e8690af61748a4acae32c78944f1094db4f518dc0ab49b4556e957b86fc4f2 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.l6W 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d8e8690af61748a4acae32c78944f1094db4f518dc0ab49b4556e957b86fc4f2 3 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d8e8690af61748a4acae32c78944f1094db4f518dc0ab49b4556e957b86fc4f2 3 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d8e8690af61748a4acae32c78944f1094db4f518dc0ab49b4556e957b86fc4f2 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.l6W 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.l6W 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.l6W 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 94211 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 94211 ']' 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.234 11:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.REW 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.oZZ ]] 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oZZ 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.bUL 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.uiG ]] 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
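The block above is SPDK's gen_dhchap_key at work: xxd pulls the requested number of random bytes, and the inline python wraps the hex string into the DHHC-1 interchange format before the file is locked down to 0600. A minimal sketch of that wrapping, assuming the usual DHHC-1 layout (base64 of the secret with a little-endian CRC-32 appended, and the digest index as a two-digit hex field); gen_dhchap_key_sketch is an illustrative name, not the exact helper from nvmf/common.sh:

# Sketch: hex secret -> "DHHC-1:<digest>:<base64(secret+crc32)>:" key file.
gen_dhchap_key_sketch() {
  local digest=$1 len=$2                           # e.g. sha256 32, as traced above
  local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  local key file
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # $len hex characters
  file=$(mktemp -t "spdk.key-$digest.XXX")
  python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")        # assumed CRC-32 tail
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
  chmod 0600 "$file"
  echo "$file"
}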
/tmp/spdk.key-sha384.uiG 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.9lW 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.yds ]] 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yds 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.V1q 00:20:00.492 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.1H6 ]] 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.1H6 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.l6W 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.751 11:42:01 
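Each key file then goes to the SPDK application over JSON-RPC, so the DH-HMAC-CHAP secrets become named keyring entries: key0..key4 for the host secrets and ckey0..ckey4 where a controller secret exists. The loop traced above amounts to the following, with rpc_cmd standing in for scripts/rpc.py talking to /var/tmp/spdk.sock:

# Registration pass: one keyring entry per secret file.
for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
  [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done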
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:00.751 11:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:01.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:01.009 Waiting for block devices as requested 00:20:01.009 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:01.266 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:01.830 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:01.831 No valid GPT data, bailing 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:01.831 No valid GPT data, bailing 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
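Before building the kernel target, the trace resets PCI bindings and scans /sys/block/nvme* for a namespace that is safe to export; "No valid GPT data, bailing" is spdk-gpt.py reporting a free disk. Roughly, with paths shortened and the probe's exit-code semantics assumed:

# Sketch: find an idle, non-zoned NVMe namespace to back the kernel target.
nvme=""
for block in /sys/block/nvme*; do
  [[ -e $block ]] || continue
  dev=${block##*/}
  [[ $(<"$block/queue/zoned") == none ]] || continue   # skip zoned devices
  # Free when SPDK's GPT probe bails and blkid sees no partition table.
  if ! spdk-gpt.py "$dev" >/dev/null 2>&1 && [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
    nvme=/dev/$dev           # last free namespace wins: /dev/nvme1n1 in this run
  fi
done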
scripts/common.sh@395 -- # return 1 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:01.831 No valid GPT data, bailing 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:01.831 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:02.089 No valid GPT data, bailing 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -a 10.0.0.1 -t tcp -s 4420 00:20:02.089 00:20:02.089 Discovery Log Number of Records 2, Generation counter 2 00:20:02.089 =====Discovery Log Entry 0====== 00:20:02.089 trtype: tcp 00:20:02.089 adrfam: ipv4 00:20:02.089 subtype: current discovery subsystem 00:20:02.089 treq: not specified, sq flow control disable supported 00:20:02.089 portid: 1 00:20:02.089 trsvcid: 4420 00:20:02.089 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:02.089 traddr: 10.0.0.1 00:20:02.089 eflags: none 00:20:02.089 sectype: none 00:20:02.089 =====Discovery Log Entry 1====== 00:20:02.089 trtype: tcp 00:20:02.089 adrfam: ipv4 00:20:02.089 subtype: nvme subsystem 00:20:02.089 treq: not specified, sq flow control disable supported 00:20:02.089 portid: 1 00:20:02.089 trsvcid: 4420 00:20:02.089 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:02.089 traddr: 10.0.0.1 00:20:02.089 eflags: none 00:20:02.089 sectype: none 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
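configure_kernel_target then wires up the Linux nvmet side entirely through configfs: one subsystem with a namespace backed by the free /dev/nvme1n1, and a TCP port on 10.0.0.1:4420 linked to it, which is exactly what the discovery log above reports back. A sketch of the traced mkdir/echo/ln -s sequence with the standard nvmet attribute names written out (the trace elides them behind redirections, so the exact mapping is inferred):

# Sketch: kernel NVMe-oF target via configfs, as traced above.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"   # auth.sh flips this to 0 just above
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"      # auth.sh@36: per-host auth entry
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"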
ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.089 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.347 nvme0n1 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.347 11:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.347 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.347 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.347 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.347 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.347 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.347 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:02.347 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.347 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
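On the initiator side, each connect_authenticate round boils down to two RPCs: bdev_nvme_set_options advertises the digest and DH group under test, and bdev_nvme_attach_controller hands the keyring names from earlier to the DH-HMAC-CHAP handshake; the controller must surface as nvme0 and is detached again before the next combination. As a sketch (connect_authenticate_sketch is an illustrative name):

# One authentication round against the kernel target, as traced above.
connect_authenticate_sketch() {
  local digest=$1 dhgroup=$2 keyid=$3
  local ckey=()
  [[ -n ${ckeys[keyid]} ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")

  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" "${ckey[@]}"

  # Attach succeeds only if DH-HMAC-CHAP completed; verify, then tear down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
}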
host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.348 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.606 nvme0n1 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.606 
11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.606 11:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.606 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.607 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.607 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.607 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.607 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.607 nvme0n1 00:20:02.607 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.607 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.607 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.607 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.607 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.607 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:02.865 11:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.865 nvme0n1 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.865 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.866 11:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.866 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.124 nvme0n1 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:03.124 
11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
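From here the log repeats that round for every key under ffdhe3072, then the larger DH groups and the remaining digests. The sweep, together with the kernel-side half of nvmet_auth_set_key (the echoed 'hmac(sha256)', DH group and DHHC-1 strings above land in the host's dhchap_* configfs attributes), is roughly:

# Sketch of the matrix driving the rest of this log. The dhchap_* names are
# the nvmet host attributes; writing them via echo/cat is assumed equivalent
# to the redirections the trace hides.
host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
for digest in sha256 sha384 sha512; do
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do
      echo "hmac($digest)" > "$host_cfg/dhchap_hash"
      echo "$dhgroup"      > "$host_cfg/dhchap_dhgroup"
      cat "${keys[keyid]}" > "$host_cfg/dhchap_key"
      [[ -n ${ckeys[keyid]} ]] && cat "${ckeys[keyid]}" > "$host_cfg/dhchap_ctrl_key"
      connect_authenticate_sketch "$digest" "$dhgroup" "$keyid"
    done
  done
done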
00:20:03.124 nvme0n1 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.124 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.382 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.382 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.382 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.382 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.382 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.382 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.382 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.382 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:03.382 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.382 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:03.382 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:03.383 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:03.383 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:03.383 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:03.383 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:03.383 11:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:03.641 11:42:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.641 nvme0n1 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.641 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.899 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.899 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.899 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.899 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.899 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.900 11:42:04 
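The bare nvme0n1 lines interleaved with the traces are the bdev names returned by bdev_nvme_attach_controller; after each attach the script confirms that authentication actually produced a live controller before tearing down for the next combination. The check and cleanup traced at host/auth.sh@64-65 amount to:

  # Success criterion: the controller registered under its -b name exists.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # Detach so the next digest/dhgroup/keyid combination starts clean.
  rpc_cmd bdev_nvme_detach_controller nvme0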
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.900 11:42:04 
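The ckey assignment traced at host/auth.sh@58 is conditional argument construction: keyids 0-3 carry a controller (bidirectional) key, keyid 4 deliberately has none, and expanding an empty array contributes no words at all, so the same attach invocation covers both cases. A standalone sketch of the idiom (the key value and setup are placeholders, not the script's data):

  declare -a ckeys=([1]='DHHC-1:02:placeholder' [4]='')   # hypothetical setup
  keyid=1
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"   # -> "--dhchap-ctrlr-key ckey1"; zero arguments for keyid=4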
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.900 nvme0n1 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.900 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.159 nvme0n1 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.159 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.417 nvme0n1 00:20:04.418 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.418 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.418 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.418 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.418 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.418 11:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.418 nvme0n1 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.418 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:04.676 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.242 11:42:05 
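The nvmf/common.sh@769-783 block that repeats before every attach is get_main_ns_ip resolving which address the initiator should dial for the active transport; with tcp it lands on NVMF_INITIATOR_IP, which is 10.0.0.1 throughout this run. Condensed, assuming the transport arrives via TEST_TRANSPORT as elsewhere in the SPDK suite:

  get_main_ns_ip() {
    # Map transport -> the *name* of the variable holding the address,
    # then dereference it with bash indirect expansion.
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    local ip=${ip_candidates[$TEST_TRANSPORT]}
    echo "${!ip}"
  }

  TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # -> 10.0.0.1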
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.242 11:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.500 nvme0n1 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.500 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.501 11:42:06 
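Every secret above uses the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<t>:<base64 of secret plus CRC-32>:, where <t> indicates whether and with which hash the secret was transformed (00 = none, 01/02/03 = SHA-256/384/512). Decoding one of the keyid-0 secrets from the trace shows the payload is an ASCII hex string followed by a four-byte CRC; a sketch, assuming GNU coreutils for the negative byte count to head:

  key='DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP:'
  b64=${key#DHHC-1:00:}   # strip the prefix
  b64=${b64%:}            # and the trailing colon
  echo "$b64" | base64 -d | head -c -4   # drop the CRC-32 trailer, keep the secret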
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.501 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.759 nvme0n1 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.759 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.760 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.760 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.018 nvme0n1 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.018 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.276 nvme0n1 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:06.276 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:06.277 11:42:06 
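The common/autotest_common.sh@563 xtrace_disable entries come from inside the rpc_cmd wrapper: tracing is silenced while the RPC itself runs (the @10 set +x lines), and the wrapper re-asserts the saved exit status once tracing is back on, which is what surfaces as [[ 0 == 0 ]] at @591 after every successful call. A guess at the shape, with only the helper names taken from the @-references in the trace:

  rpc_cmd() {
    xtrace_disable                    # "@563"; flips set +x ("@10")
    "$rootdir/scripts/rpc.py" "$@"    # assumption: how the RPC is issued
    local status=$?
    xtrace_restore
    [[ $status == 0 ]]                # traced as "[[ 0 == 0 ]]" on success
  }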
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.277 11:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.535 nvme0n1 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:06.535 11:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.438 11:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.697 nvme0n1 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.697 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.956 nvme0n1 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.956 11:42:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.956 11:42:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.956 11:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.521 nvme0n1 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:09.521 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.521 
11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.779 nvme0n1 00:20:09.779 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.779 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.779 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.779 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.779 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.779 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.037 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.296 nvme0n1 00:20:10.296 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.296 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.296 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.296 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.296 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.296 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.296 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.296 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.296 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.296 11:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.296 11:42:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.296 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.862 nvme0n1 00:20:10.862 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.862 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.862 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.862 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.862 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.862 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.120 11:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.686 nvme0n1 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:11.686 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.687 
11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.687 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.253 nvme0n1 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:12.253 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.254 11:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.820 nvme0n1 00:20:12.820 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.820 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.820 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.820 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.820 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.820 11:42:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.078 11:42:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.078 11:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.645 nvme0n1 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:13.645 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:13.646 nvme0n1 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.646 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.904 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.904 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.904 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.904 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.904 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.905 nvme0n1 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:13.905 
11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.905 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.163 nvme0n1 00:20:14.163 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.163 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.164 
11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.164 nvme0n1 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.164 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.422 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.422 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.423 11:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 nvme0n1 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.423 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.681 nvme0n1 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.681 
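[editor's note] The nvmf/common.sh block that repeats before every attach in this trace (lines @769-@783) is the get_main_ns_ip helper picking which address to dial. A minimal sketch of that selection logic, reconstructed from the trace rather than copied from the source; the environment values here are stand-ins for whatever the network prep step actually exported:

    TEST_TRANSPORT=tcp              # assumption: exported by the test environment
    NVMF_INITIATOR_IP=10.0.0.1      # assumption: set by the network setup step
    NVMF_FIRST_TARGET_IP=10.0.0.2   # assumption: only consulted on rdma runs

    get_main_ns_ip() {
        local ip
        # Map each transport to the *name* of the variable holding its address.
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # Indirect expansion resolves the variable *named* by $ip.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }

    get_main_ns_ip    # prints 10.0.0.1 for tcp, matching every attach above

This is why every bdev_nvme_attach_controller call in the log targets -a 10.0.0.1: on a tcp run the candidate map resolves to NVMF_INITIATOR_IP.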
11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.681 11:42:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.681 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.939 nvme0n1 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:14.939 11:42:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.939 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.940 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 nvme0n1 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.198 11:42:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 nvme0n1 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.198 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.457 
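[editor's note] Key id 4, which the trace enters here, is the one slot with no controller key (its "ckey=" is empty), which is why its attach calls carry no --dhchap-ctrlr-key flag. The host/auth.sh@58 idiom that makes the flag optional, as a standalone sketch; the key strings below are placeholders, not this run's secrets:

    # Placeholder secrets; the real run uses the DHHC-1 blobs shown in the log.
    declare -a ckeys=(
        "DHHC-1:03:placeholder-ctrl-key-0"
        "DHHC-1:02:placeholder-ctrl-key-1"
        "DHHC-1:01:placeholder-ctrl-key-2"
        "DHHC-1:00:placeholder-ctrl-key-3"
        ""                                  # keyid 4: no controller key
    )

    for keyid in "${!ckeys[@]}"; do
        # :+ expands to the flag pair only when ckeys[keyid] is set and
        # non-empty, so keyid 4 contributes no extra arguments at all.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done

Because the expansion is unquoted, a non-empty entry word-splits into exactly two array elements (the flag and the key name), which is what the expanded rpc_cmd lines above show.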
11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.457 11:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
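[editor's note] Stripped of xtrace noise, each connect_authenticate pass in this log reduces to four RPC calls against the initiator (the matching nvmet_auth_set_key call has already loaded the same secrets into the kernel target). A condensed sketch of one iteration, assuming rpc.py lives at scripts/rpc.py in an SPDK checkout and that the key names keyN/ckeyN were registered earlier in the run; the suite's rpc_cmd helper wraps the same script:

    rpc=scripts/rpc.py   # assumption: run from the SPDK repo root
    digest=sha384 dhgroup=ffdhe4096 keyid=0

    # 1. Pin the initiator to the digest/dhgroup pair under test.
    "$rpc" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2. Connect, offering the DH-HMAC-CHAP key (plus the controller key
    #    for the key ids that have one; keyid 4 omits --dhchap-ctrlr-key).
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # 3. Authentication passed iff the controller actually materialized.
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # 4. Tear down so the next digest/dhgroup/keyid combination starts clean.
    "$rpc" bdev_nvme_detach_controller nvme0

Every command and flag above appears verbatim in the trace; only the rpc.py path and the loop variables are assumptions. The "nvme0n1" lines interleaved in the log are the target's namespace surfacing after each successful attach.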
00:20:15.457 nvme0n1 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:15.457 11:42:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.457 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.717 nvme0n1 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.717 11:42:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:15.717 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.718 11:42:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.718 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.976 nvme0n1 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.976 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.235 nvme0n1 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.235 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.494 11:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.494 nvme0n1 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.494 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.752 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.753 nvme0n1 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.753 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.011 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
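The xtrace lines above repeat one fixed pattern: host/auth.sh sweeps every (digest, dhgroup, keyid) combination, programs the target side with nvmet_auth_set_key, then exercises the connection with connect_authenticate. A minimal sketch of that driver loop, reconstructed only from the commands visible in this trace; the keys/ckeys arrays are populated earlier in the run, and the configfs destination of the four echoes at host/auth.sh@48-51 is not shown verbatim here, so the paths below are an assumption based on the kernel nvmet host attribute format:

    #!/usr/bin/env bash
    # Sketch of the sweep traced at host/auth.sh@100-104. Assumes keys[]/ckeys[]
    # hold the DHHC-1 secrets and that the target is the kernel nvmet driver
    # configured through configfs (hypothetical path below).
    nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[keyid]} ckey=${ckeys[keyid]}
        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"   # e.g. 'hmac(sha384)'
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"     # e.g. ffdhe6144
        echo "$key" > "$nvmet_host/dhchap_key"             # DHHC-1:..: host secret
        # Controller key only when bidirectional auth is being tested:
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

With five keys, three DH groups per digest, and several digests, this inner body is what produces the long run of near-identical trace blocks in this section.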
00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.012 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.271 nvme0n1 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
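The get_main_ns_ip fragment that recurs in every block above (nvmf/common.sh@769-783) is a small transport-to-address lookup: it stores the *name* of the environment variable holding the right address per transport, then dereferences it with bash indirect expansion. A sketch reconstructed from the traced expansions; the variable carrying the transport is not named in this trace, so TEST_TRANSPORT below is an assumption (here it has already expanded to tcp, and the address to 10.0.0.1):

    # Sketch of get_main_ns_ip as traced above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                    # '[[ -z tcp ]]' above
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # '[[ -z NVMF_INITIATOR_IP ]]'
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # ip=NVMF_INITIATOR_IP
        ip=${!ip}                                               # dereference -> 10.0.0.1
        [[ -z $ip ]] && return 1                                # '[[ -z 10.0.0.1 ]]'
        echo "$ip"
    }

This is why the trace always shows the variable name first ([[ -z NVMF_INITIATOR_IP ]]) and the literal address only in the last two steps.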
00:20:17.271 11:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.838 nvme0n1 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.838 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.097 nvme0n1 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:18.097 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:18.098 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.098 11:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.357 nvme0n1 00:20:18.357 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.357 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:18.615 11:42:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.615 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.874 nvme0n1 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:18.874 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:18.875 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.875 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.875 11:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.441 nvme0n1 00:20:19.441 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.441 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.441 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.441 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.441 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.699 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.265 nvme0n1 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.265 11:42:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
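connect_authenticate (host/auth.sh@55-65) is the host-side half of each iteration: restrict the initiator to the digest and DH group under test, attach with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists, which is what the array trick at @58 arranges), confirm that a controller named nvme0 actually appeared, then detach. A close paraphrase of the traced commands, assuming rpc_cmd and the key names key0..key4/ckey0..ckey4 registered earlier in the run are already in place:

    connect_authenticate() {
        local digest dhgroup keyid ckey
        digest=$1 dhgroup=$2 keyid=$3
        # Expands to nothing when ckeys[keyid] is empty (e.g. keyid=4 in this
        # sweep), so unidirectional auth is requested in that case.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The attach only succeeds if DH-HMAC-CHAP completed; verify, then tear down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The bare "nvme0n1" tokens interleaved in the log are the namespace of that freshly attached controller being reported between the RPC calls.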
00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:20.265 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:20.266 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.266 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.266 11:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.832 nvme0n1 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.832 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:20.833 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
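The secrets exchanged above all use the NVMe-oF in-band authentication secret representation, DHHC-1:<hh>:<base64>:, where <hh> indicates how the secret was transformed (00 = used as-is, 01/02/03 = SHA-256/384/512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. That is why the 00/01 secrets in this log decode to 36 bytes (32-byte secret plus CRC) while the 03 secret decodes to 68 bytes (64-byte secret plus CRC). A quick, illustrative way to check one of them, using the keyid=0 secret from this trace:

    # Decode a DHHC-1 secret from the log and report its payload length.
    s='DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP:'
    b64=${s#DHHC-1:??:}   # strip the 'DHHC-1:<hh>:' prefix
    b64=${b64%:}          # strip the trailing ':'
    echo -n "$b64" | base64 -d | wc -c   # prints 36 = 32-byte secret + 4-byte CRC-32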
11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.398 nvme0n1 00:20:21.398 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.398 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.399 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:21.399 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.399 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.657 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.224 nvme0n1 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:22.224 11:42:22 
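
The for lines at host/auth.sh@100-102 show the shape of the sweep: every digest is paired with every DH group and every key index, and each combination goes through nvmet_auth_set_key and then connect_authenticate. A sketch of that driver loop, assuming digests, dhgroups, and keys are the arrays populated earlier in auth.sh (sha384/sha512 and ffdhe2048 through ffdhe8192 are the values visible in this slice of the run):

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do             # key indices 0-4 in this run
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
          done
      done
  done
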
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:22.224 11:42:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.224 nvme0n1 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.224 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:22.483 11:42:23 
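
Each nvmet_auth_set_key call (host/auth.sh@42-51) programs the kernel target side with the digest, DH group, host key and, when one exists, the controller key. xtrace does not print redirections, so only the echoed values below come from the trace; the configfs destinations and host path are an assumption based on the standard nvmet host attributes:

  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
      echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. 'hmac(sha512)'
      echo "$dhgroup"      > "$host/dhchap_dhgroup"   # e.g. ffdhe2048
      echo "$key"          > "$host/dhchap_key"       # DHHC-1:xx:... host key
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # only when bidirectional
  }
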
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.483 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.484 nvme0n1 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.484 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.742 nvme0n1 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:22.742 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.743 nvme0n1 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.743 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.002 nvme0n1 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.002 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:23.261 nvme0n1 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.261 11:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.519 nvme0n1 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:23.519 
11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:23.519 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.520 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.778 nvme0n1 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.778 
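
The bare nvme0n1 lines interleaved with the RPC traces are the namespace surfacing on each successful authenticated connect; the check at host/auth.sh@64-65 then confirms exactly one controller exists before tearing it down. Reconstructed from the traced commands:

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                   # traced as: [[ nvme0 == \n\v\m\e\0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
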
11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.778 nvme0n1 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.778 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.036 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.037 nvme0n1 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.037 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.295 nvme0n1 00:20:24.295 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.295 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.295 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.295 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.295 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.295 11:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.295 
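The trace to this point has stepped keyid 0..4 through sha512/ffdhe3072 and begun sha512/ffdhe4096; every iteration follows the same host-side pattern: nvmet_auth_set_key (auth.sh@103) re-keys the target, connect_authenticate (auth.sh@104) re-keys the SPDK host and attaches, then auth.sh@64-65 verify the controller and detach it. Condensed from the rpc_cmd calls visible in the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py; the key names key0..key4 and ckey0..ckey4, and the keys/ckeys arrays, are assumed to have been set up earlier in the test, outside this excerpt), one host-side iteration is roughly:

    # one connect_authenticate pass, sketched from the trace above
    digest=sha512 dhgroup=ffdhe4096 keyid=0
    # limit the host to the single digest/DH group under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # auth.sh@58: the controller key is optional -- the :+ expansion drops
    # the flag entirely for keyid 4, whose ckey is empty in this run
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # attach with DH-HMAC-CHAP; --dhchap-ctrlr-key makes it bidirectional
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # the controller only enumerates if authentication succeeded
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The trace below continues the same loop through the remaining keyids and the ffdhe6144 and ffdhe8192 groups.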
11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.295 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.553 11:42:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.553 nvme0n1 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.553 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:24.811 11:42:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.811 nvme0n1 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.811 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.069 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.069 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.070 11:42:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.070 nvme0n1 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.070 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.328 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.328 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.328 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.328 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.328 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:25.329 
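The nvmet_auth_set_key calls interleaved through this trace (host/auth.sh@42-51) are the target-side half of each iteration: before the host reconnects, the kernel nvmet target is told which digest, DH group, and DHHC-1 secret to expect for this host. xtrace prints only the echo payloads at auth.sh@48-51, not their redirect targets, so the following is a minimal sketch of what those echoes are presumed to write, assuming the standard dhchap_* configfs attributes of a kernel nvmet host entry (the attribute paths are an assumption, not shown in this log):

    # assumed destination of the auth.sh@48-51 echoes; paths are hypothetical
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_dir/dhchap_hash"      # auth.sh@48
    echo ffdhe4096      > "$host_dir/dhchap_dhgroup"   # auth.sh@49
    echo "$key"         > "$host_dir/dhchap_key"       # auth.sh@50
    # auth.sh@51 writes the controller key only when one exists
    # (keyid 4 has ckey='', so the write is skipped there)
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"

With both sides holding the same DHHC-1 secret, the subsequent bdev_nvme_attach_controller at auth.sh@61 succeeds; a mismatch would surface as the attach failing and bdev_nvme_get_controllers returning no controller at auth.sh@64.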
11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.329 11:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:20:25.329 nvme0n1 00:20:25.329 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.329 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.329 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.329 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.329 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.329 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:25.587 11:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.587 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.847 nvme0n1 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.847 11:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.847 11:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.847 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.848 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.414 nvme0n1 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:26.414 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:26.415 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.415 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.415 11:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.673 nvme0n1 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.673 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.240 nvme0n1 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.240 11:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.499 nvme0n1 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:27.499 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTgyYTBmZTY3ZGJmZWY3NjBmNDgyYTE5MDllNmU3ZTbwPdsP: 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: ]] 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZiOTAxZmVjZmFiOTA5ODU2MzkwNzNkMGNhZWJkOGQ5NGFmMWM2NGM5MjQ0NjNkNjJhNDkzM2Y3NDkyMDc0NHmGrx4=: 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.500 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.758 11:42:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.758 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.325 nvme0n1 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.325 11:42:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.325 11:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.891 nvme0n1 00:20:28.891 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.891 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.891 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.891 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.891 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.891 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.891 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.892 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.457 nvme0n1 00:20:29.457 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.457 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.457 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.457 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.457 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.457 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:29.714 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQyYjQ1YTY1ZTU2YWY2ZDllMWUzMDY0MmFkMjg3NTVjMzk2OTdhZjIzMDg4MGYxGLXTWw==: 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: ]] 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZjNTcxNTkwYzRmYmUwNWQzODFiZWFmOGE0NGViMjGFzSou: 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.715 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.281 nvme0n1 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDhlODY5MGFmNjE3NDhhNGFjYWUzMmM3ODk0NGYxMDk0ZGI0ZjUxOGRjMGFiNDliNDU1NmU5NTdiODZmYzRmMrlj5SE=: 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:30.281 11:42:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.281 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.847 nvme0n1 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.847 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:30.848 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:30.848 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:30.848 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:30.848 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:20:30.848 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:30.848 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:30.848 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.848 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:30.848 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.848 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:30.848 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.848 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.106 2024/12/16 11:42:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:31.106 request: 00:20:31.106 { 00:20:31.106 "method": "bdev_nvme_attach_controller", 00:20:31.106 "params": { 00:20:31.106 "name": "nvme0", 00:20:31.106 "trtype": "tcp", 00:20:31.106 "traddr": "10.0.0.1", 00:20:31.106 "adrfam": "ipv4", 00:20:31.106 "trsvcid": "4420", 00:20:31.106 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:31.106 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:31.106 "prchk_reftag": false, 00:20:31.106 "prchk_guard": false, 00:20:31.106 "hdgst": false, 00:20:31.106 "ddgst": false, 00:20:31.106 "allow_unrecognized_csi": false 00:20:31.106 } 00:20:31.106 } 00:20:31.106 Got JSON-RPC error response 00:20:31.106 GoRPCClient: error on JSON-RPC call 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.106 2024/12/16 11:42:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:31.106 request: 00:20:31.106 { 00:20:31.106 "method": "bdev_nvme_attach_controller", 00:20:31.106 "params": { 00:20:31.106 "name": "nvme0", 00:20:31.106 "trtype": "tcp", 00:20:31.106 "traddr": "10.0.0.1", 00:20:31.106 "adrfam": "ipv4", 00:20:31.106 "trsvcid": "4420", 00:20:31.106 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:31.106 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:31.106 "prchk_reftag": false, 00:20:31.106 "prchk_guard": false, 
00:20:31.106 "hdgst": false, 00:20:31.106 "ddgst": false, 00:20:31.106 "dhchap_key": "key2", 00:20:31.106 "allow_unrecognized_csi": false 00:20:31.106 } 00:20:31.106 } 00:20:31.106 Got JSON-RPC error response 00:20:31.106 GoRPCClient: error on JSON-RPC call 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.106 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.107 2024/12/16 11:42:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:31.107 request: 00:20:31.107 { 00:20:31.107 "method": "bdev_nvme_attach_controller", 00:20:31.107 "params": { 00:20:31.107 "name": "nvme0", 00:20:31.107 "trtype": "tcp", 00:20:31.107 "traddr": "10.0.0.1", 00:20:31.107 "adrfam": "ipv4", 00:20:31.107 "trsvcid": "4420", 00:20:31.107 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:31.107 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:31.107 "prchk_reftag": false, 00:20:31.107 "prchk_guard": false, 00:20:31.107 "hdgst": false, 00:20:31.107 "ddgst": false, 00:20:31.107 "dhchap_key": "key1", 00:20:31.107 "dhchap_ctrlr_key": "ckey2", 00:20:31.107 "allow_unrecognized_csi": false 00:20:31.107 } 00:20:31.107 } 00:20:31.107 Got JSON-RPC error response 00:20:31.107 GoRPCClient: error on JSON-RPC call 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.107 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.366 nvme0n1 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:31.366 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.367 2024/12/16 11:42:31 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:20:31.367 request: 00:20:31.367 { 00:20:31.367 "method": "bdev_nvme_set_keys", 00:20:31.367 "params": { 00:20:31.367 "name": "nvme0", 00:20:31.367 "dhchap_key": "key1", 00:20:31.367 "dhchap_ctrlr_key": "ckey2" 00:20:31.367 } 00:20:31.367 } 00:20:31.367 Got JSON-RPC error response 00:20:31.367 GoRPCClient: error on JSON-RPC call 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:31.367 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:31.367 11:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.367 11:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:31.367 11:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.367 11:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.367 11:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.367 11:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:31.367 11:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:32.742 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.742 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:32.743 11:42:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVhYWU0MjY0NTFkMTYxYmUyMzliYzM0NzZlMzc0Y2Y4NmY1ZDBlMzZjNTI5ODg5wjh+4Q==: 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: ]] 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjAyN2ViYTgxMmNmZTg3NGQwNTMwN2UzYmRlMDk1YTAyNGU1MGU1MjZlZmI0NzUyEsEUpg==: 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.743 nvme0n1 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTExOWQzZmNmZWU2NDk2NTgyOWI2ODJmZmZjY2NiMzktCSxF: 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: ]] 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGIwYWM0OWM5YjFlZmRiMmI1MTMwODI5Mzc4MGRiYWZn16gQ: 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.743 2024/12/16 11:42:33 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:20:32.743 request: 00:20:32.743 { 00:20:32.743 "method": "bdev_nvme_set_keys", 00:20:32.743 "params": { 00:20:32.743 "name": "nvme0", 00:20:32.743 "dhchap_key": "key2", 00:20:32.743 "dhchap_ctrlr_key": "ckey1" 00:20:32.743 } 00:20:32.743 } 00:20:32.743 Got JSON-RPC error response 00:20:32.743 GoRPCClient: error on JSON-RPC call 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:32.743 11:42:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:32.743 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:33.679 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.680 rmmod nvme_tcp 00:20:33.680 rmmod nvme_fabrics 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 94211 ']' 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 94211 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 94211 ']' 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 94211 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.680 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94211 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.939 killing process 
with pid 94211 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94211' 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 94211 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 94211 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:33.939 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:34.198 11:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:35.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:35.135 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:35.135 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:35.135 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.REW /tmp/spdk.key-null.bUL /tmp/spdk.key-sha256.9lW /tmp/spdk.key-sha384.V1q /tmp/spdk.key-sha512.l6W /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:35.135 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:35.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:35.702 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:35.702 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:35.702 00:20:35.702 real 0m37.108s 00:20:35.702 user 0m33.851s 00:20:35.702 sys 0m3.790s 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.702 ************************************ 00:20:35.702 END TEST nvmf_auth_host 00:20:35.702 ************************************ 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.702 ************************************ 00:20:35.702 START TEST nvmf_digest 00:20:35.702 
************************************ 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:35.702 * Looking for test storage... 00:20:35.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.702 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:35.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.703 --rc genhtml_branch_coverage=1 00:20:35.703 --rc genhtml_function_coverage=1 00:20:35.703 --rc genhtml_legend=1 00:20:35.703 --rc geninfo_all_blocks=1 00:20:35.703 --rc geninfo_unexecuted_blocks=1 00:20:35.703 00:20:35.703 ' 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:35.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.703 --rc genhtml_branch_coverage=1 00:20:35.703 --rc genhtml_function_coverage=1 00:20:35.703 --rc genhtml_legend=1 00:20:35.703 --rc geninfo_all_blocks=1 00:20:35.703 --rc geninfo_unexecuted_blocks=1 00:20:35.703 00:20:35.703 ' 00:20:35.703 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:35.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.703 --rc genhtml_branch_coverage=1 00:20:35.703 --rc genhtml_function_coverage=1 00:20:35.703 --rc genhtml_legend=1 00:20:35.703 --rc geninfo_all_blocks=1 00:20:35.703 --rc geninfo_unexecuted_blocks=1 00:20:35.703 00:20:35.703 ' 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:35.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.962 --rc genhtml_branch_coverage=1 00:20:35.962 --rc genhtml_function_coverage=1 00:20:35.962 --rc genhtml_legend=1 00:20:35.962 --rc geninfo_all_blocks=1 00:20:35.962 --rc geninfo_unexecuted_blocks=1 00:20:35.962 00:20:35.962 ' 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.962 11:42:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.962 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.963 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:35.963 Cannot find device "nvmf_init_br" 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:35.963 Cannot find device "nvmf_init_br2" 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:35.963 Cannot find device "nvmf_tgt_br" 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:20:35.963 Cannot find device "nvmf_tgt_br2" 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:35.963 Cannot find device "nvmf_init_br" 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:35.963 Cannot find device "nvmf_init_br2" 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:35.963 Cannot find device "nvmf_tgt_br" 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:35.963 Cannot find device "nvmf_tgt_br2" 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:35.963 Cannot find device "nvmf_br" 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:35.963 Cannot find device "nvmf_init_if" 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:35.963 Cannot find device "nvmf_init_if2" 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:35.963 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:35.963 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:35.963 11:42:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:35.963 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:36.223 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:36.223 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:20:36.223 00:20:36.223 --- 10.0.0.3 ping statistics --- 00:20:36.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.223 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:36.223 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:36.223 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:20:36.223 00:20:36.223 --- 10.0.0.4 ping statistics --- 00:20:36.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.223 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:36.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:36.223 00:20:36.223 --- 10.0.0.1 ping statistics --- 00:20:36.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.223 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:36.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:20:36.223 00:20:36.223 --- 10.0.0.2 ping statistics --- 00:20:36.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.223 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:36.223 ************************************ 00:20:36.223 START TEST nvmf_digest_clean 00:20:36.223 ************************************ 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
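For reference, the four pings above are the last step of nvmf_veth_init, whose topology reduces to: two veth endpoints on the initiator side (10.0.0.1 and 10.0.0.2), two inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined through the nvmf_br bridge, with iptables ACCEPT rules for the NVMe/TCP port. A minimal sketch of one pair (interface names and addresses as they appear in the trace; this paraphrases nvmf/common.sh rather than quoting it):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + bridge end
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end + bridge end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The nvmf_init_if2/nvmf_tgt_if2 pair is wired identically; pinging all four addresses before any NVMe traffic flows confirms the bridge forwards in both directions.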
00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=95866 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 95866 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95866 ']' 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.223 11:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:36.223 [2024-12-16 11:42:36.969365] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:20:36.223 [2024-12-16 11:42:36.969451] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.483 [2024-12-16 11:42:37.124044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.483 [2024-12-16 11:42:37.175540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.483 [2024-12-16 11:42:37.175607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.483 [2024-12-16 11:42:37.175628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.483 [2024-12-16 11:42:37.175638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.483 [2024-12-16 11:42:37.175654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
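nvmfappstart above launches nvmf_tgt inside that namespace with --wait-for-rpc, so the target sits idle until it is configured over /var/tmp/spdk.sock. The common_target_config body is hidden inside a single rpc_cmd here-doc in this trace, but judging by its visible effects just below (the null0 bdev, the '*** TCP Transport Init ***' notice, and the listener on 10.0.0.3:4420), it is roughly equivalent to the following sketch; the bdev size and block-size arguments are assumptions, not taken from the log:

scripts/rpc.py framework_start_init
scripts/rpc.py bdev_null_create null0 100 512    # backing bdev; 'null0' is seen below, sizes assumed
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The nqn.2016-06.io.spdk:cnode1 name is not guessed: host/digest.sh@14 pins it, and every bdev_nvme_attach_controller below connects to it.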
00:20:36.483 [2024-12-16 11:42:37.176130] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.483 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.483 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:36.483 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:36.483 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.483 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:36.741 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.741 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:36.741 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:36.741 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:36.741 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.741 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:36.741 null0 00:20:36.741 [2024-12-16 11:42:37.394634] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.741 [2024-12-16 11:42:37.418814] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:36.741 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95897 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95897 /var/tmp/bperf.sock 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95897 ']' 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:36.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.742 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:36.742 [2024-12-16 11:42:37.485932] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:20:36.742 [2024-12-16 11:42:37.486030] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95897 ] 00:20:37.000 [2024-12-16 11:42:37.639770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.000 [2024-12-16 11:42:37.698869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.259 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.259 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:37.259 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:37.259 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:37.259 11:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:37.517 11:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:37.517 11:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:37.776 nvme0n1 00:20:38.035 11:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:38.035 11:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:38.035 Running I/O for 2 seconds... 
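Before the numbers land, it is worth spelling out the bperf pattern that every digest pass below reuses (commands condensed from the trace above; the backgrounding is implicit in the script):

build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

-m 2 pins the bperf reactor to core 1 so it never contends with the target on core 0; -z together with --wait-for-rpc keeps bdevperf idle until perform_tests arrives over its private RPC socket; and --ddgst enables NVMe/TCP data digests, which is what generates crc32c work for the accel framework to count. The nvme0n1 line above is the attached namespace appearing as a bdev.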
00:20:40.348 19075.00 IOPS, 74.51 MiB/s [2024-12-16T11:42:41.101Z] 19095.50 IOPS, 74.59 MiB/s 00:20:40.348 Latency(us) 00:20:40.348 [2024-12-16T11:42:41.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.348 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:40.348 nvme0n1 : 2.00 19118.72 74.68 0.00 0.00 6687.28 3247.01 16562.73 00:20:40.348 [2024-12-16T11:42:41.101Z] =================================================================================================================== 00:20:40.348 [2024-12-16T11:42:41.101Z] Total : 19118.72 74.68 0.00 0.00 6687.28 3247.01 16562.73 00:20:40.348 { 00:20:40.348 "results": [ 00:20:40.348 { 00:20:40.348 "job": "nvme0n1", 00:20:40.348 "core_mask": "0x2", 00:20:40.348 "workload": "randread", 00:20:40.348 "status": "finished", 00:20:40.348 "queue_depth": 128, 00:20:40.348 "io_size": 4096, 00:20:40.348 "runtime": 2.004266, 00:20:40.348 "iops": 19118.719770729036, 00:20:40.348 "mibps": 74.6824991044103, 00:20:40.348 "io_failed": 0, 00:20:40.348 "io_timeout": 0, 00:20:40.348 "avg_latency_us": 6687.275426147485, 00:20:40.348 "min_latency_us": 3247.010909090909, 00:20:40.348 "max_latency_us": 16562.734545454547 00:20:40.348 } 00:20:40.348 ], 00:20:40.348 "core_count": 1 00:20:40.348 } 00:20:40.348 11:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:40.348 11:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:40.348 11:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:40.348 11:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:40.348 11:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:40.348 | select(.opcode=="crc32c") 00:20:40.348 | "\(.module_name) \(.executed)"' 00:20:40.348 11:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95897 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95897 ']' 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95897 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95897 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
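Two sanity checks fall out of the results block above. First, the MiB/s column is just IOPS scaled by the 4096-byte I/O size, which can be confirmed with nothing more than bc:

echo '19118.72 * 4096 / 1048576' | bc -l    # about 74.68, matching the reported Total MiB/s

Second, the accel_get_stats | jq pipeline extracts which accel module executed the crc32c operations; with DSA scanning disabled (scan_dsa=false) the test expects module_name 'software' with a non-zero executed count, and the [[ software == \s\o\f\t\w\a\r\e ]] comparison above is that assertion passing.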
00:20:40.348 killing process with pid 95897 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95897' 00:20:40.348 Received shutdown signal, test time was about 2.000000 seconds 00:20:40.348 00:20:40.348 Latency(us) 00:20:40.348 [2024-12-16T11:42:41.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.348 [2024-12-16T11:42:41.101Z] =================================================================================================================== 00:20:40.348 [2024-12-16T11:42:41.101Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95897 00:20:40.348 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95897 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95974 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95974 /var/tmp/bperf.sock 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95974 ']' 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.607 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:40.607 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:40.607 Zero copy mechanism will not be used. 00:20:40.607 [2024-12-16 11:42:41.282816] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:20:40.607 [2024-12-16 11:42:41.282912] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95974 ] 00:20:40.865 [2024-12-16 11:42:41.427252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.865 [2024-12-16 11:42:41.483174] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.865 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.865 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:40.865 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:40.865 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:40.865 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:41.499 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:41.499 11:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:41.499 nvme0n1 00:20:41.499 11:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:41.499 11:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:41.757 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:41.757 Zero copy mechanism will not be used. 00:20:41.757 Running I/O for 2 seconds... 
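This second pass switches to 131072-byte reads at queue depth 16. The 'zero copy threshold (65536)' notice above comes from SPDK's socket layer: I/Os larger than the threshold skip the zero-copy send path, hence 'Zero copy mechanism will not be used' (that reading of the message is an interpretation, not stated in the log). The throughput arithmetic from the first pass still applies, only with the bigger block size; at 131072 bytes per I/O, MiB/s is exactly IOPS divided by 8:

echo '7334.08 * 131072 / 1048576' | bc -l    # 916.76, matching the Total line in the table below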
00:20:43.629 7214.00 IOPS, 901.75 MiB/s [2024-12-16T11:42:44.382Z] 7338.00 IOPS, 917.25 MiB/s 00:20:43.629 Latency(us) 00:20:43.629 [2024-12-16T11:42:44.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.629 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:43.629 nvme0n1 : 2.00 7334.08 916.76 0.00 0.00 2177.81 629.29 4915.20 00:20:43.629 [2024-12-16T11:42:44.382Z] =================================================================================================================== 00:20:43.629 [2024-12-16T11:42:44.382Z] Total : 7334.08 916.76 0.00 0.00 2177.81 629.29 4915.20 00:20:43.629 { 00:20:43.629 "results": [ 00:20:43.629 { 00:20:43.629 "job": "nvme0n1", 00:20:43.629 "core_mask": "0x2", 00:20:43.629 "workload": "randread", 00:20:43.629 "status": "finished", 00:20:43.629 "queue_depth": 16, 00:20:43.629 "io_size": 131072, 00:20:43.629 "runtime": 2.003251, 00:20:43.629 "iops": 7334.078455470632, 00:20:43.629 "mibps": 916.759806933829, 00:20:43.629 "io_failed": 0, 00:20:43.629 "io_timeout": 0, 00:20:43.629 "avg_latency_us": 2177.807453901938, 00:20:43.629 "min_latency_us": 629.2945454545454, 00:20:43.629 "max_latency_us": 4915.2 00:20:43.629 } 00:20:43.629 ], 00:20:43.629 "core_count": 1 00:20:43.629 } 00:20:43.629 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:43.629 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:43.629 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:43.629 | select(.opcode=="crc32c") 00:20:43.629 | "\(.module_name) \(.executed)"' 00:20:43.629 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:43.629 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95974 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95974 ']' 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95974 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95974 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:44.197 killing 
process with pid 95974 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95974' 00:20:44.197 Received shutdown signal, test time was about 2.000000 seconds 00:20:44.197 00:20:44.197 Latency(us) 00:20:44.197 [2024-12-16T11:42:44.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.197 [2024-12-16T11:42:44.950Z] =================================================================================================================== 00:20:44.197 [2024-12-16T11:42:44.950Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95974 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95974 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96051 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96051 /var/tmp/bperf.sock 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96051 ']' 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.197 11:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:44.457 [2024-12-16 11:42:44.996290] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:20:44.457 [2024-12-16 11:42:44.996409] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96051 ] 00:20:44.457 [2024-12-16 11:42:45.138118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.457 [2024-12-16 11:42:45.193797] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.716 11:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.717 11:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:44.717 11:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:44.717 11:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:44.717 11:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:44.975 11:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:44.975 11:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:45.542 nvme0n1 00:20:45.542 11:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:45.542 11:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:45.542 Running I/O for 2 seconds... 
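Each run_bperf iteration follows the same choreography, visible in the xtrace above: start bdevperf paused, finish its framework init over RPC, attach the TCP controller with data digest enabled, then kick off the workload through the same socket. Condensed from the traced commands (paths shortened; the backgrounding and wait-for-socket loop are implied by waitforlisten):

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &   # -z: idle until perform_tests
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests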
00:20:47.417 22771.00 IOPS, 88.95 MiB/s [2024-12-16T11:42:48.170Z] 23061.50 IOPS, 90.08 MiB/s 00:20:47.417 Latency(us) 00:20:47.417 [2024-12-16T11:42:48.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.417 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:47.417 nvme0n1 : 2.01 23045.41 90.02 0.00 0.00 5545.70 2263.97 11796.48 00:20:47.417 [2024-12-16T11:42:48.170Z] =================================================================================================================== 00:20:47.417 [2024-12-16T11:42:48.170Z] Total : 23045.41 90.02 0.00 0.00 5545.70 2263.97 11796.48 00:20:47.417 { 00:20:47.417 "results": [ 00:20:47.417 { 00:20:47.417 "job": "nvme0n1", 00:20:47.417 "core_mask": "0x2", 00:20:47.417 "workload": "randwrite", 00:20:47.417 "status": "finished", 00:20:47.417 "queue_depth": 128, 00:20:47.417 "io_size": 4096, 00:20:47.417 "runtime": 2.009077, 00:20:47.417 "iops": 23045.408413913454, 00:20:47.417 "mibps": 90.02112661684943, 00:20:47.417 "io_failed": 0, 00:20:47.417 "io_timeout": 0, 00:20:47.417 "avg_latency_us": 5545.699620341645, 00:20:47.417 "min_latency_us": 2263.970909090909, 00:20:47.417 "max_latency_us": 11796.48 00:20:47.417 } 00:20:47.417 ], 00:20:47.417 "core_count": 1 00:20:47.417 } 00:20:47.417 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:47.417 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:47.677 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:47.677 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:47.677 | select(.opcode=="crc32c") 00:20:47.677 | "\(.module_name) \(.executed)"' 00:20:47.677 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96051 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96051 ']' 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96051 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96051 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:47.936 
killing process with pid 96051 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96051' 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96051 00:20:47.936 Received shutdown signal, test time was about 2.000000 seconds 00:20:47.936 00:20:47.936 Latency(us) 00:20:47.936 [2024-12-16T11:42:48.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.936 [2024-12-16T11:42:48.689Z] =================================================================================================================== 00:20:47.936 [2024-12-16T11:42:48.689Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:47.936 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96051 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96127 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96127 /var/tmp/bperf.sock 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96127 ']' 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.194 11:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:48.194 [2024-12-16 11:42:48.743289] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:20:48.194 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:48.194 Zero copy mechanism will not be used. 
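The JSON blocks bdevperf prints next to each human-readable latency table (three so far in this test) carry the same figures in machine-readable form. Assuming one block were saved to result.json (a hypothetical file name, for illustration only), a jq one-liner reduces it to the numbers the table shows:

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' result.json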
00:20:48.194 [2024-12-16 11:42:48.743407] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96127 ] 00:20:48.194 [2024-12-16 11:42:48.884932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.194 [2024-12-16 11:42:48.939395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.128 11:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.128 11:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:49.128 11:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:49.128 11:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:49.129 11:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:49.387 11:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:49.387 11:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:49.954 nvme0n1 00:20:49.954 11:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:49.954 11:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:49.954 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:49.954 Zero copy mechanism will not be used. 00:20:49.954 Running I/O for 2 seconds... 
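digest_clean sweeps this same check over four workload shapes. The two randwrite runs are the ones traced here (host/digest.sh@130 and @131); the matching randread pair ran earlier in the log (the randread 131072/16 results appear above, and the 4096/128 shape is an inference from the digest_error section, which reuses that geometry). In outline:

  run_bperf randread  4096   128 false   # inferred shape, from earlier in the log
  run_bperf randread  131072 16  false   # results shown above
  run_bperf randwrite 4096   128 false   # host/digest.sh@130
  run_bperf randwrite 131072 16  false   # host/digest.sh@131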
00:20:52.263 6630.00 IOPS, 828.75 MiB/s [2024-12-16T11:42:53.016Z] 6685.00 IOPS, 835.62 MiB/s 00:20:52.263 Latency(us) 00:20:52.263 [2024-12-16T11:42:53.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.263 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:52.263 nvme0n1 : 2.00 6682.48 835.31 0.00 0.00 2388.47 1802.24 7298.33 00:20:52.263 [2024-12-16T11:42:53.016Z] =================================================================================================================== 00:20:52.263 [2024-12-16T11:42:53.016Z] Total : 6682.48 835.31 0.00 0.00 2388.47 1802.24 7298.33 00:20:52.263 { 00:20:52.263 "results": [ 00:20:52.263 { 00:20:52.263 "job": "nvme0n1", 00:20:52.263 "core_mask": "0x2", 00:20:52.263 "workload": "randwrite", 00:20:52.263 "status": "finished", 00:20:52.263 "queue_depth": 16, 00:20:52.263 "io_size": 131072, 00:20:52.263 "runtime": 2.003148, 00:20:52.263 "iops": 6682.481773688215, 00:20:52.263 "mibps": 835.3102217110269, 00:20:52.263 "io_failed": 0, 00:20:52.263 "io_timeout": 0, 00:20:52.263 "avg_latency_us": 2388.4729413362675, 00:20:52.263 "min_latency_us": 1802.24, 00:20:52.263 "max_latency_us": 7298.327272727272 00:20:52.263 } 00:20:52.263 ], 00:20:52.263 "core_count": 1 00:20:52.263 } 00:20:52.263 11:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:52.263 11:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:52.263 11:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:52.263 11:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:52.263 11:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:52.263 | select(.opcode=="crc32c") 00:20:52.263 | "\(.module_name) \(.executed)"' 00:20:52.263 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:52.263 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:52.263 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:52.263 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:52.263 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96127 00:20:52.263 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96127 ']' 00:20:52.263 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96127 00:20:52.263 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:52.263 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.263 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96127 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:52.522 killing process with pid 96127 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96127' 00:20:52.522 Received shutdown signal, test time was about 2.000000 seconds 00:20:52.522 00:20:52.522 Latency(us) 00:20:52.522 [2024-12-16T11:42:53.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.522 [2024-12-16T11:42:53.275Z] =================================================================================================================== 00:20:52.522 [2024-12-16T11:42:53.275Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96127 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96127 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 95866 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95866 ']' 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95866 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95866 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.522 killing process with pid 95866 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95866' 00:20:52.522 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95866 00:20:52.780 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95866 00:20:52.780 00:20:52.780 real 0m16.570s 00:20:52.780 user 0m31.908s 00:20:52.780 sys 0m4.674s 00:20:52.780 ************************************ 00:20:52.780 END TEST nvmf_digest_clean 00:20:52.780 ************************************ 00:20:52.780 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.780 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:52.780 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:52.780 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:52.780 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.780 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:52.780 ************************************ 00:20:52.780 START TEST nvmf_digest_error 00:20:52.780 ************************************ 00:20:52.780 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:20:52.780 11:42:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:52.780 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:52.780 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:52.780 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:53.039 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=96243 00:20:53.039 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 96243 00:20:53.039 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:53.039 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96243 ']' 00:20:53.039 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.039 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.039 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.039 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.039 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:53.039 [2024-12-16 11:42:53.586976] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:20:53.039 [2024-12-16 11:42:53.587358] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.039 [2024-12-16 11:42:53.738626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.298 [2024-12-16 11:42:53.796475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.298 [2024-12-16 11:42:53.796556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.298 [2024-12-16 11:42:53.796584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.298 [2024-12-16 11:42:53.796593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.298 [2024-12-16 11:42:53.796600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
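nvmf_digest_error restarts the target with --wait-for-rpc for one reason: accel opcode routing can only be changed while the framework is still uninitialized. The trace lines just below reroute crc32c to the error-injection module before anything else touches it; in outline (rpc.py against the target's default /var/tmp/spdk.sock):

  # target started with --wait-for-rpc sits idle until told to finish init
  scripts/rpc.py accel_assign_opc -o crc32c -m error   # route crc32c to the 'error' module
  # init then resumes and the usual target config follows: null0 bdev,
  # TCP transport, listener on 10.0.0.3:4420 (see the notices below)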
00:20:53.298 [2024-12-16 11:42:53.797035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.298 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.298 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:53.298 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.298 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.298 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:53.298 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.298 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:53.298 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.298 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:53.298 [2024-12-16 11:42:53.905612] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:53.298 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.298 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:53.298 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:53.299 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.299 11:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:53.299 null0 00:20:53.299 [2024-12-16 11:42:54.026100] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.299 [2024-12-16 11:42:54.050273] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96278 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96278 /var/tmp/bperf.sock 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96278 ']' 00:20:53.557 11:42:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:53.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.557 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:53.557 [2024-12-16 11:42:54.108326] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:20:53.558 [2024-12-16 11:42:54.108407] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96278 ] 00:20:53.558 [2024-12-16 11:42:54.250673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.558 [2024-12-16 11:42:54.309786] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.816 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.816 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:53.816 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:53.816 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:54.075 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:54.075 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.075 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:54.075 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.075 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:54.075 11:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:54.641 nvme0n1 00:20:54.641 11:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:54.641 11:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.641 11:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:54.641 11:42:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.641 11:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:54.641 11:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:54.641 Running I/O for 2 seconds... 00:20:54.641 [2024-12-16 11:42:55.245047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.641 [2024-12-16 11:42:55.245119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.641 [2024-12-16 11:42:55.245150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.641 [2024-12-16 11:42:55.257917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.641 [2024-12-16 11:42:55.257971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.641 [2024-12-16 11:42:55.258001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.641 [2024-12-16 11:42:55.273632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.641 [2024-12-16 11:42:55.273673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.641 [2024-12-16 11:42:55.273687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.641 [2024-12-16 11:42:55.287104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.641 [2024-12-16 11:42:55.287182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.641 [2024-12-16 11:42:55.287196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.641 [2024-12-16 11:42:55.301102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.641 [2024-12-16 11:42:55.301181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.641 [2024-12-16 11:42:55.301196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.642 [2024-12-16 11:42:55.314196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.642 [2024-12-16 11:42:55.314263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.642 [2024-12-16 11:42:55.314277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
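Every failed READ from here on is deliberate. With crc32c routed to the error module on the target, the test first makes sure injection is disabled while the controller attaches, then arms corruption for a batch of operations and starts I/O: each affected C2HData PDU carries a bad data digest, the host's nvme_tcp layer rejects it, and the command completes with the transient transport error status (00/22) repeated below. The two injection calls, as traced via rpc_cmd (digest.sh@63 and @67):

  scripts/rpc.py accel_error_inject_error -o crc32c -t disable         # keep the attach path clean
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt crc32c results (-i 256 as traced)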
00:20:54.642 [2024-12-16 11:42:55.327442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.642 [2024-12-16 11:42:55.327498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.642 [2024-12-16 11:42:55.327511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.642 [2024-12-16 11:42:55.341699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.642 [2024-12-16 11:42:55.341754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.642 [2024-12-16 11:42:55.341784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.642 [2024-12-16 11:42:55.351960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.642 [2024-12-16 11:42:55.352000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.642 [2024-12-16 11:42:55.352014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.642 [2024-12-16 11:42:55.365934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.642 [2024-12-16 11:42:55.365991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.642 [2024-12-16 11:42:55.366020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.642 [2024-12-16 11:42:55.379798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.642 [2024-12-16 11:42:55.379839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.642 [2024-12-16 11:42:55.379852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.642 [2024-12-16 11:42:55.393659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.642 [2024-12-16 11:42:55.393713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.642 [2024-12-16 11:42:55.393742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.407198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.900 [2024-12-16 11:42:55.407263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.900 [2024-12-16 11:42:55.407276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.420990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.900 [2024-12-16 11:42:55.421045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.900 [2024-12-16 11:42:55.421074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.434469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.900 [2024-12-16 11:42:55.434526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.900 [2024-12-16 11:42:55.434540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.447542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.900 [2024-12-16 11:42:55.447595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.900 [2024-12-16 11:42:55.447623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.460355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.900 [2024-12-16 11:42:55.460408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.900 [2024-12-16 11:42:55.460436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.473220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.900 [2024-12-16 11:42:55.473279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.900 [2024-12-16 11:42:55.473307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.485928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.900 [2024-12-16 11:42:55.485982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.900 [2024-12-16 11:42:55.486011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.497397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.900 [2024-12-16 11:42:55.497449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.900 [2024-12-16 11:42:55.497477] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.511419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.900 [2024-12-16 11:42:55.511473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.900 [2024-12-16 11:42:55.511501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.525354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.900 [2024-12-16 11:42:55.525394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.900 [2024-12-16 11:42:55.525408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.538624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.900 [2024-12-16 11:42:55.538677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.900 [2024-12-16 11:42:55.538705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.552030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.900 [2024-12-16 11:42:55.552084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.900 [2024-12-16 11:42:55.552114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.900 [2024-12-16 11:42:55.565927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.901 [2024-12-16 11:42:55.565981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.901 [2024-12-16 11:42:55.566009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.901 [2024-12-16 11:42:55.577845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.901 [2024-12-16 11:42:55.577899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.901 [2024-12-16 11:42:55.577928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.901 [2024-12-16 11:42:55.591059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.901 [2024-12-16 11:42:55.591115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.901 [2024-12-16 
11:42:55.591144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.901 [2024-12-16 11:42:55.602828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.901 [2024-12-16 11:42:55.602882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.901 [2024-12-16 11:42:55.602911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.901 [2024-12-16 11:42:55.614972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.901 [2024-12-16 11:42:55.615025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.901 [2024-12-16 11:42:55.615054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.901 [2024-12-16 11:42:55.628066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.901 [2024-12-16 11:42:55.628122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.901 [2024-12-16 11:42:55.628151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.901 [2024-12-16 11:42:55.641263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:54.901 [2024-12-16 11:42:55.641318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.901 [2024-12-16 11:42:55.641330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.160 [2024-12-16 11:42:55.654199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.160 [2024-12-16 11:42:55.654251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.160 [2024-12-16 11:42:55.654280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.160 [2024-12-16 11:42:55.667047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.160 [2024-12-16 11:42:55.667102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.160 [2024-12-16 11:42:55.667130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.160 [2024-12-16 11:42:55.680742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.160 [2024-12-16 11:42:55.680798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2417 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:55.160 [2024-12-16 11:42:55.680827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.160 [2024-12-16 11:42:55.695083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.160 [2024-12-16 11:42:55.695137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.160 [2024-12-16 11:42:55.695179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.160 [2024-12-16 11:42:55.708922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.160 [2024-12-16 11:42:55.708977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.160 [2024-12-16 11:42:55.709006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.160 [2024-12-16 11:42:55.722311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.160 [2024-12-16 11:42:55.722367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.160 [2024-12-16 11:42:55.722380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.160 [2024-12-16 11:42:55.734212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.160 [2024-12-16 11:42:55.734265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.160 [2024-12-16 11:42:55.734278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.160 [2024-12-16 11:42:55.747117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.160 [2024-12-16 11:42:55.747183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.160 [2024-12-16 11:42:55.747212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.160 [2024-12-16 11:42:55.760192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.160 [2024-12-16 11:42:55.760272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.161 [2024-12-16 11:42:55.760287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.161 [2024-12-16 11:42:55.774672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.161 [2024-12-16 11:42:55.774712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:23 nsid:1 lba:3133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.161 [2024-12-16 11:42:55.774726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.161 [2024-12-16 11:42:55.789663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.161 [2024-12-16 11:42:55.789703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.161 [2024-12-16 11:42:55.789716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.161 [2024-12-16 11:42:55.803607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.161 [2024-12-16 11:42:55.803661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.161 [2024-12-16 11:42:55.803699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.161 [2024-12-16 11:42:55.817650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.161 [2024-12-16 11:42:55.817721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.161 [2024-12-16 11:42:55.817734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.161 [2024-12-16 11:42:55.831598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.161 [2024-12-16 11:42:55.831653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.161 [2024-12-16 11:42:55.831682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.161 [2024-12-16 11:42:55.844844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.161 [2024-12-16 11:42:55.844899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.161 [2024-12-16 11:42:55.844928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.161 [2024-12-16 11:42:55.858270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.161 [2024-12-16 11:42:55.858323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.161 [2024-12-16 11:42:55.858369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.161 [2024-12-16 11:42:55.871368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.161 [2024-12-16 11:42:55.871422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.161 [2024-12-16 11:42:55.871450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.161 [2024-12-16 11:42:55.884847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.161 [2024-12-16 11:42:55.884898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.161 [2024-12-16 11:42:55.884927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.161 [2024-12-16 11:42:55.897934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.161 [2024-12-16 11:42:55.897987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.161 [2024-12-16 11:42:55.898017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.161 [2024-12-16 11:42:55.911167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.161 [2024-12-16 11:42:55.911219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.161 [2024-12-16 11:42:55.911248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.420 [2024-12-16 11:42:55.925422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.420 [2024-12-16 11:42:55.925463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.420 [2024-12-16 11:42:55.925477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.420 [2024-12-16 11:42:55.940614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.420 [2024-12-16 11:42:55.940652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.420 [2024-12-16 11:42:55.940666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.420 [2024-12-16 11:42:55.955387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 00:20:55.420 [2024-12-16 11:42:55.955426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.420 [2024-12-16 11:42:55.955439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:55.420 [2024-12-16 11:42:55.967313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050) 
00:20:55.420 [2024-12-16 11:42:55.967353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.420 [2024-12-16 11:42:55.967367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:55.420 [2024-12-16 11:42:55.980811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1244050)
00:20:55.420 [2024-12-16 11:42:55.980869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:55.420 [2024-12-16 11:42:55.980898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x1244050), the failing READ, a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining reads between 11:42:55.997 and 11:42:57.228, interrupted only by the periodic throughput updates below ...]
00:20:55.679 18748.00 IOPS, 73.23 MiB/s [2024-12-16T11:42:56.432Z]
00:20:56.745 18812.00 IOPS, 73.48 MiB/s [2024-12-16T11:42:57.498Z]
00:20:56.745
00:20:56.745 Latency(us)
00:20:56.745 Device Information          : runtime(s)      IOPS    MiB/s   Fail/s   TO/s   Average      min      max
00:20:56.745 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:56.745 nvme0n1 : 2.01               18827.64    73.55     0.00   0.00   6789.81  3589.59  17754.30
00:20:56.745 ===================================================================================================================
00:20:56.745 Total                       : 18827.64    73.55     0.00   0.00   6789.81  3589.59  17754.30
00:20:56.745 {
00:20:56.745   "results": [
00:20:56.745     {
00:20:56.745       "job": "nvme0n1",
00:20:56.745       "core_mask": "0x2",
00:20:56.745       "workload": "randread",
00:20:56.745       "status": "finished",
00:20:56.745       "queue_depth": 128,
00:20:56.745       "io_size": 4096,
00:20:56.745       "runtime": 2.005137,
00:20:56.745       "iops": 18827.641203568634,
00:20:56.745       "mibps": 73.54547345143997,
00:20:56.745       "io_failed": 0,
00:20:56.745       "io_timeout": 0,
00:20:56.745       "avg_latency_us": 6789.807282744804,
00:20:56.745       "min_latency_us": 3589.5854545454545,
00:20:56.745       "max_latency_us": 17754.298181818183
00:20:56.745     }
00:20:56.745   ],
00:20:56.745   "core_count": 1
00:20:56.745 }
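The JSON block above is bdevperf's machine-readable copy of the latency table that precedes it. To pull the headline numbers back out of a captured log, a one-liner along these lines is enough (a sketch: results.json is a hypothetical file holding just the JSON block, and jq is assumed to be available):

  # Print job name, IOPS and average latency from a saved bdevperf result
  jq -r '.results[0] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json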
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
	| .driver_specific
	| .nvme_error
	| .status_code
	| .command_transient_transport_error'
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 ))
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96278
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96278 ']'
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96278
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96278
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:57.004 killing process with pid 96278
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96278'
00:20:57.004 Received shutdown signal, test time was about 2.000000 seconds
00:20:57.004
00:20:57.004 Latency(us)
00:20:57.004 Device Information          : runtime(s)      IOPS    MiB/s   Fail/s   TO/s   Average      min      max
00:20:57.004 ===================================================================================================================
00:20:57.004 Total                       : 0.00    0.00     0.00   0.00   0.00  0.00  0.00
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96278
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96278
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96349
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96349 /var/tmp/bperf.sock
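The sequence above is the pass/fail core of the test: get_transient_errcount fetches the bdev's NVMe error counters over the bperf RPC socket, and the (( 148 > 0 )) check asserts that at least one COMMAND TRANSIENT TRANSPORT ERROR was counted before the first bdevperf instance is torn down. Standalone, the same query looks like this (assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and jq is installed):

  # The nvme_error counters exist because the controller was created with --nvme-error-stat
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'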
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96349 ']'
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:57.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
11:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:57.263 [2024-12-16 11:42:57.862426] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:20:57.263 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:57.263 Zero copy mechanism will not be used.
00:20:57.263 [2024-12-16 11:42:57.862549] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96349 ]
00:20:57.263 [2024-12-16 11:42:58.010485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:57.521 [2024-12-16 11:42:58.061673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:58.038 nvme0n1
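Stripped of the xtrace noise, each run_bperf_err iteration brings up its bdevperf instance with the same three steps traced above; roughly (a condensed sketch, with the backgrounding and the waitforlisten readiness loop elided):

  # Start bdevperf idle (-z waits for an RPC trigger) on a dedicated RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  # Count NVMe errors per status code, and retry transient errors indefinitely
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the target with TCP data digest (--ddgst) enabled so payload CRCs are verified
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0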
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
11:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:58.298 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:58.298 Zero copy mechanism will not be used.
00:20:58.298 Running I/O for 2 seconds...
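The digest errors that follow are expected: accel_error_inject_error above armed the accel layer to corrupt crc32c results at an interval of 32 operations, so with --ddgst enabled each corrupted data digest surfaces as a data digest error in nvme_tcp.c and completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the retry policy absorbs (io_failed stayed 0 in the previous run's results). The injection itself is just a pair of RPCs (a sketch; rpc_cmd in the trace wraps scripts/rpc.py, and using the target's default RPC socket is an assumption here):

  # Corrupt one in every 32 crc32c operations...
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # ...and switch injection back off once the run is finished
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable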
00:20:58.298 [2024-12-16 11:42:58.928223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50)
00:20:58.298 [2024-12-16 11:42:58.928300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.298 [2024-12-16 11:42:58.928332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
[... the same three-line pattern repeats on tqpair=(0xb4ae50) for the remaining 131072-byte (len:32) reads of this run, from 11:42:58.932 onward ...]
00:20:58.299 [2024-12-16 11:42:59.002671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50)
00:20:58.299 [2024-12-16 11:42:59.002717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.299 [2024-12-16 11:42:59.002731] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.299 [2024-12-16 11:42:59.005714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.299 [2024-12-16 11:42:59.005783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.299 [2024-12-16 11:42:59.005811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.299 [2024-12-16 11:42:59.010529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.299 [2024-12-16 11:42:59.010568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.299 [2024-12-16 11:42:59.010581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.299 [2024-12-16 11:42:59.014122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.299 [2024-12-16 11:42:59.014187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.299 [2024-12-16 11:42:59.014201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.299 [2024-12-16 11:42:59.018727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.299 [2024-12-16 11:42:59.018782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.299 [2024-12-16 11:42:59.018811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.299 [2024-12-16 11:42:59.023823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.299 [2024-12-16 11:42:59.023865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.299 [2024-12-16 11:42:59.023878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.299 [2024-12-16 11:42:59.028746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.299 [2024-12-16 11:42:59.028801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.299 [2024-12-16 11:42:59.028830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.299 [2024-12-16 11:42:59.032798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.299 [2024-12-16 11:42:59.032853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.299 [2024-12-16 11:42:59.032882] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.299 [2024-12-16 11:42:59.036437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.299 [2024-12-16 11:42:59.036493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.299 [2024-12-16 11:42:59.036522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.299 [2024-12-16 11:42:59.040900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.299 [2024-12-16 11:42:59.040956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.299 [2024-12-16 11:42:59.040984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.299 [2024-12-16 11:42:59.045928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.299 [2024-12-16 11:42:59.045986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.299 [2024-12-16 11:42:59.046015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.050831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.050886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-12-16 11:42:59.050915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.053828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.053884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-12-16 11:42:59.053897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.058603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.058660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-12-16 11:42:59.058688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.062274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.062314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:58.559 [2024-12-16 11:42:59.062327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.065972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.066030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-12-16 11:42:59.066060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.069584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.069638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-12-16 11:42:59.069667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.073343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.073400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-12-16 11:42:59.073429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.077378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.077438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-12-16 11:42:59.077451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.080830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.080888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-12-16 11:42:59.080901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.084909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.084966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-12-16 11:42:59.084994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.088712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.088769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-12-16 11:42:59.088798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.093025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.093083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-12-16 11:42:59.093111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.096326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.559 [2024-12-16 11:42:59.096382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-12-16 11:42:59.096411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.559 [2024-12-16 11:42:59.100642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.100697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.100726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.105516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.105572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.105602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.108764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.108820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.108849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.112905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.112963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.112991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.117710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.117767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.117796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.122238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.122294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.122323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.125404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.125459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.125488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.130130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.130216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.130245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.133558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.133602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.133615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.137707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.137764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.137792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.141475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.141518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.141531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.145725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 
[2024-12-16 11:42:59.145783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.145812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.149050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.149105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.149134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.153189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.153245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.153258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.157785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.157843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.157856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.162800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.162858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.162871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.167507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.167548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.167561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.170289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.170340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.170352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.175046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.175101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.175114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.179534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.179605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.179618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.184307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.184363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.184392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.188275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.188347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.188375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.191334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.191386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.191416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.195810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.195848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.195862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.200779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.200835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.200865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.204370] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.204412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.204425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.208485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.208543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.208556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.560 [2024-12-16 11:42:59.212918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.560 [2024-12-16 11:42:59.212972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-12-16 11:42:59.213001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.217394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.217451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.217480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.221024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.221079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.221107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.224271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.224342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.224370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.228063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.228118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.228146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:20:58.561 [2024-12-16 11:42:59.233033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.233090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.233119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.236392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.236447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.236460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.240385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.240440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.240452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.245007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.245061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.245090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.248458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.248513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.248542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.253241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.253295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.253323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.256707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.256758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.256785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.260645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.260716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.260744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.264902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.264957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.264984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.269074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.269127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.269182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.273013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.273068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.273097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.276372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.276428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.276456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.280492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.280549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.280578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.283846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.283884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.283899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.288279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.288349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.288377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.292251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.292283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.292327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.295283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.295334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.295362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.299271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.299325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.299354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.303818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.303859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.303872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.561 [2024-12-16 11:42:59.306932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.561 [2024-12-16 11:42:59.306982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-12-16 11:42:59.307010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.311603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.311658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.311686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.315303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.315354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.315382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.318130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.318191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.318220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.322886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.322942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.322970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.326388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.326442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.326470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.329675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.329731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.329758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.333477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.333532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.333560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.337105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.337184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 
[2024-12-16 11:42:59.337198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.341070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.341127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.341166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.344756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.344810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.344838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.349093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.349174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.349187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.352674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.352731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.352759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.356869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.356925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.356953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.361358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.361415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.361428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.365841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.365897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.365926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.370747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.370805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.370818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.375217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.375281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.375311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.379979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.380035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.380079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.384933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.384989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.385017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.389660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.389746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.389774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.822 [2024-12-16 11:42:59.394191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.822 [2024-12-16 11:42:59.394254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.822 [2024-12-16 11:42:59.394282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.397465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.397521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.397534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.402034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.402090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.402119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.405520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.405575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.405604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.409815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.409871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.409899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.414966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.415023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.415052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.418566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.418606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.418619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.422476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.422517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.422530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.426822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.426880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.426909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.430239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.430293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.430323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.434418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.434477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.434490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.438444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.438502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.438532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.441792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.441848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.441878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.446038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.446095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.446124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.451170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.451221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.451234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.455915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 
[2024-12-16 11:42:59.455957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.455971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.458786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.458839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.458851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.463020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.463073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.463101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.467422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.467478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.467490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.472387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.472444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.472473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.475755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.475793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.475806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.479614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.479666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.479703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.484603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.484659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.484688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.488110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.488179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.488210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.491897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.491940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.491953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.496263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.496319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.496348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.500314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.500371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.500400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.504177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.504247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.823 [2024-12-16 11:42:59.504276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.823 [2024-12-16 11:42:59.508318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.823 [2024-12-16 11:42:59.508390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.508419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.511479] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.511532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.511545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.515838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.515878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.515890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.520099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.520182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.520196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.523541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.523609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.523622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.527598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.527651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.527679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.531801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.531847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.531860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.535617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.535669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.535706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:20:58.824 [2024-12-16 11:42:59.539658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.539757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.539771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.544173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.544238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.544267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.547213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.547275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.547305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.551631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.551687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.551731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.556576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.556650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.556679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.560700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.560755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.560784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.564903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.564959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.564987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.568427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.568482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.568511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:58.824 [2024-12-16 11:42:59.572726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:58.824 [2024-12-16 11:42:59.572783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.824 [2024-12-16 11:42:59.572811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.084 [2024-12-16 11:42:59.577562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.084 [2024-12-16 11:42:59.577619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.084 [2024-12-16 11:42:59.577648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.084 [2024-12-16 11:42:59.580929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.084 [2024-12-16 11:42:59.580985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.084 [2024-12-16 11:42:59.581012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.084 [2024-12-16 11:42:59.585195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.084 [2024-12-16 11:42:59.585251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.084 [2024-12-16 11:42:59.585279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.589698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.589754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.589782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.594601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.594657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.594686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.599134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.599197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.599227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.602059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.602115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.602143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.606802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.606859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.606887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.611641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.611718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.611731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.616478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.616551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.616595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.620895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.620952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.620981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.625105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.625186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.625199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.628377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.628419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.628432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.632790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.632847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.632860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.637575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.637618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.637631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.642561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.642632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.642661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.647456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.647498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.647512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.650376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.650412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.650425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.654736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.654790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 
[2024-12-16 11:42:59.654818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.659653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.659741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.659755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.664879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.664964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.664977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.669594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.669637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.669650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.672479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.672529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.672542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.677671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.677711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.677724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.682792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.682848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.682862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.687354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.687393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.687407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.690738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.085 [2024-12-16 11:42:59.690792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.085 [2024-12-16 11:42:59.690805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.085 [2024-12-16 11:42:59.694832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.694887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.694908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.699807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.699848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.699861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.704460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.704515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.704528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.707819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.707855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.707868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.712293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.712348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.712361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.717483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.717523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.717536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.721411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.721466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.721480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.724732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.724787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.724816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.728424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.728480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.728513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.732611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.732667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.732696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.736253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.736307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.736335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.739956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.739998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.740011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.744174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.744242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.744271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.748102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.748184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.748199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.751937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.751979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.751992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.756264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.756320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.756348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.760474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.760529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.760558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.764263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.764302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.764330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.768147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.768229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.768243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.772327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 
[2024-12-16 11:42:59.772382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.772411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.775781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.775819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.775848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.780173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.780220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.780249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.783031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.783084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.783113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.787257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.787310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.787338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.791930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.791971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.791984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.796496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.796552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.796581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.799889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb4ae50) 00:20:59.086 [2024-12-16 11:42:59.799928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.086 [2024-12-16 11:42:59.799941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.086 [2024-12-16 11:42:59.804178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.087 [2024-12-16 11:42:59.804226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.087 [2024-12-16 11:42:59.804240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.087 [2024-12-16 11:42:59.808732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.087 [2024-12-16 11:42:59.808787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.087 [2024-12-16 11:42:59.808816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.087 [2024-12-16 11:42:59.813114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.087 [2024-12-16 11:42:59.813166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.087 [2024-12-16 11:42:59.813180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.087 [2024-12-16 11:42:59.817343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.087 [2024-12-16 11:42:59.817383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.087 [2024-12-16 11:42:59.817411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.087 [2024-12-16 11:42:59.820783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.087 [2024-12-16 11:42:59.820852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.087 [2024-12-16 11:42:59.820881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.087 [2024-12-16 11:42:59.825375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.087 [2024-12-16 11:42:59.825416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.087 [2024-12-16 11:42:59.825429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.087 [2024-12-16 11:42:59.830110] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.087 [2024-12-16 11:42:59.830198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.087 [2024-12-16 11:42:59.830212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.087 [2024-12-16 11:42:59.834879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.087 [2024-12-16 11:42:59.834935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.087 [2024-12-16 11:42:59.834964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.346 [2024-12-16 11:42:59.838237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.346 [2024-12-16 11:42:59.838289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.346 [2024-12-16 11:42:59.838318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.346 [2024-12-16 11:42:59.842714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.346 [2024-12-16 11:42:59.842769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.346 [2024-12-16 11:42:59.842798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.346 [2024-12-16 11:42:59.847796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.346 [2024-12-16 11:42:59.847837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.346 [2024-12-16 11:42:59.847850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.346 [2024-12-16 11:42:59.853125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.346 [2024-12-16 11:42:59.853207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.346 [2024-12-16 11:42:59.853222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.347 [2024-12-16 11:42:59.858085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.347 [2024-12-16 11:42:59.858166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.347 [2024-12-16 11:42:59.858191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:20:59.347 [2024-12-16 11:42:59.861024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.347 [2024-12-16 11:42:59.861080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.347 [2024-12-16 11:42:59.861093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.347 [2024-12-16 11:42:59.866072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.347 [2024-12-16 11:42:59.866129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.347 [2024-12-16 11:42:59.866168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.347 [2024-12-16 11:42:59.871056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.347 [2024-12-16 11:42:59.871113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.347 [2024-12-16 11:42:59.871143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.347 [2024-12-16 11:42:59.875687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.347 [2024-12-16 11:42:59.875736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.347 [2024-12-16 11:42:59.875749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.347 [2024-12-16 11:42:59.878817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.347 [2024-12-16 11:42:59.878869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.347 [2024-12-16 11:42:59.878898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.347 [2024-12-16 11:42:59.882992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.347 [2024-12-16 11:42:59.883045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.347 [2024-12-16 11:42:59.883077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.347 [2024-12-16 11:42:59.886854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.347 [2024-12-16 11:42:59.886909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.347 [2024-12-16 11:42:59.886937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:20:59.347 [2024-12-16 11:42:59.890518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50)
00:20:59.347 [2024-12-16 11:42:59.890590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:59.347 [2024-12-16 11:42:59.890603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
[... the same three-record sequence repeats every 3-5 ms through 11:43:00.054: a data digest error on tqpair=(0xb4ae50), the failed READ it belongs to (qid:1, len:32, cids 0/3/5/6/7/8, varying lba), and that command's completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0015/0035/0055/0075 ...]
7445.00 IOPS, 930.62 MiB/s [2024-12-16T11:43:00.100Z]
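Each record above is the NVMe/TCP initiator rejecting a data PDU whose trailing data digest (DDGST, a CRC32C over the PDU payload) does not match the value the receive path recomputes; nvme_tcp_accel_seq_recv_compute_crc32_done runs when that recomputation finishes, and on a mismatch the READ is completed as a transient transport error (retryable, dnr:0) rather than a data error. A minimal standalone sketch of that digest comparison, assuming a textbook bitwise CRC32C in place of SPDK's accel-offloaded path (crc32c and check_ddgst are illustrative names here, not SPDK APIs):

/* crc32c_ddgst_demo.c: recompute CRC32C over a received payload and
 * compare it with the digest that trailed the data PDU. */
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* 0 on match; -1 is the "data digest error" case seen in the log. */
static int check_ddgst(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
{
    return crc32c(payload, len) == recv_ddgst ? 0 : -1;
}

int main(void)
{
    const uint8_t data[] = "123456789";
    uint32_t good = crc32c(data, 9);   /* standard CRC32C check value 0xE3069283 */

    printf("intact digest:    %d\n", check_ddgst(data, 9, good));
    printf("corrupted digest: %d\n", check_ddgst(data, 9, good ^ 1u));
    return 0;
}

This stage appears to be exercising digest-error handling deliberately: the errors arrive at a steady rate and every completion is retryable, which is the expected shape for injected corruption rather than real link trouble.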
[... identical digest-error records continue for another ~400 ms, now also on cids 4/9/10/11/12/14/15; every completion carries the same retryable (00/22) status with dnr:0 as the elapsed-time prefix advances from 00:20:59.348 to 00:20:59.872 ...]
00:20:59.872 [2024-12-16 11:43:00.471565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50)
00:20:59.872 [2024-12-16 11:43:00.471607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:59.872 [2024-12-16 11:43:00.471620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:20:59.872 [2024-12-16 11:43:00.475017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.872 [2024-12-16 11:43:00.475068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.872 [2024-12-16 11:43:00.475098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.872 [2024-12-16 11:43:00.479414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.872 [2024-12-16 11:43:00.479475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.872 [2024-12-16 11:43:00.479488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.872 [2024-12-16 11:43:00.484304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.872 [2024-12-16 11:43:00.484343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.872 [2024-12-16 11:43:00.484372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.872 [2024-12-16 11:43:00.487603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.872 [2024-12-16 11:43:00.487641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.872 [2024-12-16 11:43:00.487654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.872 [2024-12-16 11:43:00.491713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.872 [2024-12-16 11:43:00.491751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.872 [2024-12-16 11:43:00.491764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.496801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.496857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.496887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.501694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.501751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.501780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.505443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.505498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.505511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.509640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.509696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.509725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.514111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.514181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.514211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.519098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.519180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.519195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.522095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.522176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.522190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.526405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.526460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.526488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.530939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.530993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.531022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.535389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.535429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.535442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.538915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.538971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.538999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.543920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.543962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.543975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.549092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.549175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.549189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.554326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.554368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.554381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.557734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.557805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.557818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.562422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.562463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.562476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.567575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.567618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.567631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.572834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.572893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.572907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.577633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.577689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.577718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.580406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.580459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.580488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.584857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.584899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.584923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.588854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.588910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.588938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.592896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.592949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 
[2024-12-16 11:43:00.592976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.598108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.598188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.598202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.603017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.603071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.603099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.606493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.606546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.606574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.611025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.611080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.873 [2024-12-16 11:43:00.611108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:59.873 [2024-12-16 11:43:00.615951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.873 [2024-12-16 11:43:00.615994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.874 [2024-12-16 11:43:00.616008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:59.874 [2024-12-16 11:43:00.619318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:20:59.874 [2024-12-16 11:43:00.619370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.874 [2024-12-16 11:43:00.619399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.134 [2024-12-16 11:43:00.623502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.134 [2024-12-16 11:43:00.623559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.134 [2024-12-16 11:43:00.623588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.134 [2024-12-16 11:43:00.627756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.134 [2024-12-16 11:43:00.627794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.134 [2024-12-16 11:43:00.627808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.134 [2024-12-16 11:43:00.631675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.134 [2024-12-16 11:43:00.631736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.134 [2024-12-16 11:43:00.631749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.134 [2024-12-16 11:43:00.635749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.134 [2024-12-16 11:43:00.635788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.134 [2024-12-16 11:43:00.635801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.134 [2024-12-16 11:43:00.639951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.134 [2024-12-16 11:43:00.640007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.134 [2024-12-16 11:43:00.640050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.134 [2024-12-16 11:43:00.644245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.134 [2024-12-16 11:43:00.644300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.134 [2024-12-16 11:43:00.644328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.134 [2024-12-16 11:43:00.647553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.134 [2024-12-16 11:43:00.647591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.647604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.652127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.652192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.652221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.656969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.657025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.657055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.661763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.661819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.661849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.666273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.666314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.666343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.670836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.670893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.670921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.674056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.674111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.674125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.677497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.677538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.677550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.682046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.682086] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.682114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.686452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.686495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.686508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.689802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.689842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.689870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.693889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.693929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.693958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.697098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.697167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.697182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.701276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.701314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.701344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.705418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.705460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.705472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.709144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 
00:21:00.135 [2024-12-16 11:43:00.709194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.709223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.713441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.713483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.713496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.717184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.717223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.717252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.721250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.721289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.721318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.724557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.724598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.724626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.728570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.728609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.728638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.732643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.732684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.732713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.735916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.735956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.735968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.740293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.740332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.740361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.745163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.745203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.745231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.749840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.749881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.749910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.753390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.753430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.753443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.757562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.757602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.135 [2024-12-16 11:43:00.757614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.135 [2024-12-16 11:43:00.761831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.135 [2024-12-16 11:43:00.761870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.761899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.765792] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.765831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.765860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.769127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.769181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.769194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.774053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.774093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.774122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.779236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.779275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.779304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.784280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.784318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.784347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.787780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.787818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.787831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.792089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.792129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.792170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:21:00.136 [2024-12-16 11:43:00.796938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.796978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.797006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.801526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.801565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.801593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.804995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.805035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.805064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.808569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.808610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.808639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.812351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.812391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.812420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.815763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.815802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.815814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.820107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.820160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.820185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.824729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.824771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.824784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.828265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.828307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.828320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.832229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.832268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.832281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.837229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.837269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.837282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.842127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.842179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.842193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.847009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.847050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.847063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.849758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.849796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.849809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.854157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.854193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.854206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.858740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.858781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.858794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.863384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.863425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.863438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.866630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.866670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.866684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.871042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.871084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.871097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.875764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.875804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-12-16 11:43:00.875817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.136 [2024-12-16 11:43:00.880131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.136 [2024-12-16 11:43:00.880183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-12-16 11:43:00.880197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.137 [2024-12-16 11:43:00.884795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.137 [2024-12-16 11:43:00.884836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-12-16 11:43:00.884848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.396 [2024-12-16 11:43:00.889158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.396 [2024-12-16 11:43:00.889197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-12-16 11:43:00.889210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.396 [2024-12-16 11:43:00.893692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.396 [2024-12-16 11:43:00.893734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-12-16 11:43:00.893747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:00.396 [2024-12-16 11:43:00.898873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.396 [2024-12-16 11:43:00.898915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-12-16 11:43:00.898928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:00.396 [2024-12-16 11:43:00.902478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.396 [2024-12-16 11:43:00.902518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-12-16 11:43:00.902531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:00.396 [2024-12-16 11:43:00.906948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.396 [2024-12-16 11:43:00.906988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-12-16 11:43:00.907001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:00.396 [2024-12-16 11:43:00.912075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50) 00:21:00.396 [2024-12-16 11:43:00.912117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 
00:21:00.396 [2024-12-16 11:43:00.916900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4ae50)
00:21:00.396 [2024-12-16 11:43:00.916940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:00.396 [2024-12-16 11:43:00.916953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:21:00.396 7372.00 IOPS, 921.50 MiB/s
00:21:00.396 Latency(us)
00:21:00.396 [2024-12-16T11:43:01.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:00.396 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:21:00.396 nvme0n1 : 2.00 7371.98 921.50 0.00 0.00 2166.53 644.19 9353.77
00:21:00.396 [2024-12-16T11:43:01.149Z] ===================================================================================================================
00:21:00.396 [2024-12-16T11:43:01.149Z] Total : 7371.98 921.50 0.00 0.00 2166.53 644.19 9353.77
00:21:00.396 {
00:21:00.396   "results": [
00:21:00.396     {
00:21:00.396       "job": "nvme0n1",
00:21:00.396       "core_mask": "0x2",
00:21:00.396       "workload": "randread",
00:21:00.396       "status": "finished",
00:21:00.396       "queue_depth": 16,
00:21:00.396       "io_size": 131072,
00:21:00.396       "runtime": 2.002175,
00:21:00.396       "iops": 7371.982968521733,
00:21:00.396       "mibps": 921.4978710652166,
00:21:00.396       "io_failed": 0,
00:21:00.396       "io_timeout": 0,
00:21:00.396       "avg_latency_us": 2166.5295767430403,
00:21:00.396       "min_latency_us": 644.189090909091,
00:21:00.396       "max_latency_us": 9353.774545454546
00:21:00.396     }
00:21:00.396   ],
00:21:00.396   "core_count": 1
00:21:00.396 }
00:21:00.396 11:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
11:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
11:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
11:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:00.396 | .driver_specific
00:21:00.396 | .nvme_error
00:21:00.396 | .status_code
00:21:00.396 | .command_transient_transport_error'
00:21:00.655 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 476 > 0 ))
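The check above is the pass/fail gate for this leg: the injected crc32c corruption must have produced a nonzero transient-transport-error count in bdevperf's iostat. A minimal standalone sketch of the same readback, assuming the bperf RPC socket is still live at /var/tmp/bperf.sock and jq is installed; the jq path mirrors the filter in the log, everything else is illustrative and not the autotest helper itself:

    #!/usr/bin/env bash
    # Sketch only: read the per-bdev NVMe error counters that
    # "bdev_nvme_set_options --nvme-error-stat" makes bdevperf keep, and
    # assert at least one TRANSIENT TRANSPORT ERROR (00/22) was recorded.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # Same comparison the test performs above (it read back 476 here).
    (( count > 0 )) && echo "OK: $count transient transport errors counted"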
00:21:00.655 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96349
00:21:00.655 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96349 ']'
00:21:00.655 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96349
00:21:00.655 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:21:00.655 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:00.655 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96349
00:21:00.655 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:00.655 killing process with pid 96349
11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:00.655 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96349'
Received shutdown signal, test time was about 2.000000 seconds
00:21:00.655
00:21:00.655                                                       Latency(us)
[2024-12-16T11:43:01.408Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-16T11:43:01.408Z] ===================================================================================================================
[2024-12-16T11:43:01.408Z] Total                       :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:21:00.655 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96349
00:21:00.655 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96349
00:21:00.913 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:21:00.913 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:00.913 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:21:00.913 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:21:00.914 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:21:00.914 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96427
00:21:00.914 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96427 /var/tmp/bperf.sock
00:21:00.914 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:21:00.914 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96427 ']'
00:21:00.914 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:00.914 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:00.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:00.914 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:00.914 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
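With the randread instance gone, digest.sh starts a fresh bdevperf for the randwrite case. The flags come straight from the traced command line: -m 2 pins the app to core 1 (the "Reactor started on core 1" notice below confirms it), -r names its private RPC socket, -w/-o/-t/-q select a randwrite workload of 4096-byte I/Os at queue depth 128 for 2 seconds, and -z keeps the app idle until a perform_tests RPC arrives. A sketch of the launch, assuming waitforlisten is the autotest helper (seen at common/autotest_common.sh@835-844) that polls until the socket accepts connections:

    # Launch sketch based on the traced command; paths as printed in the log.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock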
00:21:00.914 [2024-12-16 11:43:01.566225] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:21:00.914 [2024-12-16 11:43:01.566365] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96427 ]
00:21:01.172 [2024-12-16 11:43:01.711682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:01.172 [2024-12-16 11:43:01.761978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:21:01.172 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:01.172 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:21:01.172 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:01.172 11:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:01.431 11:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:01.431 11:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.431 11:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:01.689 11:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.690 11:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:01.690 11:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:01.948 nvme0n1
00:21:01.948 11:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:21:01.948 11:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.948 11:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:01.948 11:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.948 11:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:01.948 11:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:01.948 Running I/O for 2 seconds...
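Before the run starts, the script wires up error accounting and digest corruption. In a hedged reading of the trace: bperf_rpc (which expands to rpc.py -s /var/tmp/bperf.sock) configures the bdevperf side with per-error statistics and infinite retries, and attaches the controller with --ddgst so TCP data digests are generated and checked; rpc_cmd uses the default RPC socket, which by inference is the nvmf target, the side whose accel crc32c results get corrupted. Injection is disabled for a clean connect, then enabled with the traced interval of 256 operations:

    # Condensed from the xtrace above; helper names as used by digest.sh.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable           # clean connect
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0              # prints "nvme0n1"
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256    # corrupt crc32c, -i 256 per the trace
    bperf_py perform_tests                                          # kicks off the 2-second run

Each corrupted digest then shows up twice in the flood below: the target logs "Data digest error" from tcp.c, and the host sees the corresponding WRITE complete as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which --bdev-retry-count -1 retries rather than fails.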
00:21:01.948 [2024-12-16 11:43:02.677696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef20d8 00:21:01.948 [2024-12-16 11:43:02.679529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.948 [2024-12-16 11:43:02.679960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.948 [2024-12-16 11:43:02.692242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee0a68 00:21:01.948 [2024-12-16 11:43:02.694144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.948 [2024-12-16 11:43:02.694221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.948 [2024-12-16 11:43:02.700744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee3498 00:21:02.226 [2024-12-16 11:43:02.701680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.701764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.714988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef4b08 00:21:02.226 [2024-12-16 11:43:02.716606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.716643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.726114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef8618 00:21:02.226 [2024-12-16 11:43:02.727497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.727534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.737781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eeff18 00:21:02.226 [2024-12-16 11:43:02.739091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.739173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.751536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ede8a8 00:21:02.226 [2024-12-16 11:43:02.753532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.753600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.759819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee5658 00:21:02.226 [2024-12-16 11:43:02.760815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.760866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.773852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efeb58 00:21:02.226 [2024-12-16 11:43:02.775554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.775603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.784653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef6458 00:21:02.226 [2024-12-16 11:43:02.786069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.786121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.795759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eedd58 00:21:02.226 [2024-12-16 11:43:02.797125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.797199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.806393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016edf118 00:21:02.226 [2024-12-16 11:43:02.807530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.807582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.817434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee7818 00:21:02.226 [2024-12-16 11:43:02.818552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.818602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.831134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efc998 00:21:02.226 [2024-12-16 11:43:02.832917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.832969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.839417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef8618 00:21:02.226 [2024-12-16 11:43:02.840207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.840246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.853063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eebb98 00:21:02.226 [2024-12-16 11:43:02.854568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.854617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.863948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee12d8 00:21:02.226 [2024-12-16 11:43:02.865187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.865233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.875069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee99d8 00:21:02.226 [2024-12-16 11:43:02.876285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.876323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.888691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efa7d8 00:21:02.226 [2024-12-16 11:43:02.890532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.890597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.897006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efa7d8 00:21:02.226 [2024-12-16 11:43:02.897893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.897944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.910524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee99d8 00:21:02.226 [2024-12-16 11:43:02.912099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.912173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.921535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee3498 00:21:02.226 [2024-12-16 11:43:02.922792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.922828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.933330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eebb98 00:21:02.226 [2024-12-16 11:43:02.934600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.934651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:02.226 [2024-12-16 11:43:02.945957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efc128 00:21:02.226 [2024-12-16 11:43:02.947416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.226 [2024-12-16 11:43:02.947459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:02.227 [2024-12-16 11:43:02.957851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef0ff8 00:21:02.227 [2024-12-16 11:43:02.959107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.227 [2024-12-16 11:43:02.959190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:02.227 [2024-12-16 11:43:02.972093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eecc78 00:21:02.227 [2024-12-16 11:43:02.974052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.227 [2024-12-16 11:43:02.974104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:02.980519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee8088 00:21:02.486 [2024-12-16 11:43:02.981479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:02.981544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:02.994308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef5be8 00:21:02.486 [2024-12-16 11:43:02.995965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:02.996021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.004964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efe2e8 00:21:02.486 [2024-12-16 11:43:03.006409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.006445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.015931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eeee38 00:21:02.486 [2024-12-16 11:43:03.017310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.017344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.026450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eee5c8 00:21:02.486 [2024-12-16 11:43:03.027620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.027670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.037379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee5ec8 00:21:02.486 [2024-12-16 11:43:03.038486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.038537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.050990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef7da8 00:21:02.486 [2024-12-16 11:43:03.052797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.052845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.059228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efe2e8 00:21:02.486 [2024-12-16 11:43:03.060034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.060101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.072735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016edf988 00:21:02.486 [2024-12-16 11:43:03.074137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.074215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.083953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eec408 00:21:02.486 [2024-12-16 11:43:03.085112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.085188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.095819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee3d08 00:21:02.486 [2024-12-16 11:43:03.096926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.096976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.110130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef9f68 00:21:02.486 [2024-12-16 11:43:03.112043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.112094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.118700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efc128 00:21:02.486 [2024-12-16 11:43:03.119587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.119635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.132095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee1b48 00:21:02.486 [2024-12-16 11:43:03.133659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.133708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.142713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eea248 00:21:02.486 [2024-12-16 11:43:03.144013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.144062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.153910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee1b48 00:21:02.486 [2024-12-16 11:43:03.155130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.155205] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.167364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efc128 00:21:02.486 [2024-12-16 11:43:03.169224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.169259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.175400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef9f68 00:21:02.486 [2024-12-16 11:43:03.176346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.176397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.188927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee3d08 00:21:02.486 [2024-12-16 11:43:03.190578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.190643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.199819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee8088 00:21:02.486 [2024-12-16 11:43:03.201149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.201212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.211366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016edf988 00:21:02.486 [2024-12-16 11:43:03.212668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.212720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.225578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efe2e8 00:21:02.486 [2024-12-16 11:43:03.227530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.227598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:02.486 [2024-12-16 11:43:03.233753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef7da8 00:21:02.486 [2024-12-16 11:43:03.234771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.486 [2024-12-16 11:43:03.234821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.247424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eee5c8 00:21:02.747 [2024-12-16 11:43:03.249093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.249170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.258050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee5ec8 00:21:02.747 [2024-12-16 11:43:03.259502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.259538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.269501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eeee38 00:21:02.747 [2024-12-16 11:43:03.270871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.270922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.280699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efe2e8 00:21:02.747 [2024-12-16 11:43:03.281790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.281828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.292172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef5be8 00:21:02.747 [2024-12-16 11:43:03.293258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.293295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.306599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eec408 00:21:02.747 [2024-12-16 11:43:03.308482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.308520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.315232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee7818 00:21:02.747 [2024-12-16 11:43:03.316015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 
11:43:03.316069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.329448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef0ff8 00:21:02.747 [2024-12-16 11:43:03.330885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.330935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.341139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef96f8 00:21:02.747 [2024-12-16 11:43:03.342152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.342197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.352306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee49b0 00:21:02.747 [2024-12-16 11:43:03.353161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.353225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.363412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efd640 00:21:02.747 [2024-12-16 11:43:03.364071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.364110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.376532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef0ff8 00:21:02.747 [2024-12-16 11:43:03.377983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.378034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.387471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee2c28 00:21:02.747 [2024-12-16 11:43:03.388848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.388900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.398516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ede8a8 00:21:02.747 [2024-12-16 11:43:03.399683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:02.747 [2024-12-16 11:43:03.399744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.409507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef1ca0 00:21:02.747 [2024-12-16 11:43:03.410512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.410563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.420419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef6890 00:21:02.747 [2024-12-16 11:43:03.421270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.421305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.434887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eec840 00:21:02.747 [2024-12-16 11:43:03.436794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.436846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.445914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eddc00 00:21:02.747 [2024-12-16 11:43:03.447616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.447679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.454635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efe2e8 00:21:02.747 [2024-12-16 11:43:03.455472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.455506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.468861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef0ff8 00:21:02.747 [2024-12-16 11:43:03.470378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.470419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.479828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef0bc0 00:21:02.747 [2024-12-16 11:43:03.481064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16844 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.481101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:02.747 [2024-12-16 11:43:03.491315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eff3c8 00:21:02.747 [2024-12-16 11:43:03.492561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.747 [2024-12-16 11:43:03.492613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.505335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee84c0 00:21:03.007 [2024-12-16 11:43:03.507190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.507224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.513356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee01f8 00:21:03.007 [2024-12-16 11:43:03.514330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.514385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.526890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eed920 00:21:03.007 [2024-12-16 11:43:03.528621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.528671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.537443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee38d0 00:21:03.007 [2024-12-16 11:43:03.538772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.538822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.547801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee5a90 00:21:03.007 [2024-12-16 11:43:03.548796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.548845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.558491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee0a68 00:21:03.007 [2024-12-16 11:43:03.559378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6171 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.559429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.572620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef4298 00:21:03.007 [2024-12-16 11:43:03.574438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.574474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.583605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efb480 00:21:03.007 [2024-12-16 11:43:03.585373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.585423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.594163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ede8a8 00:21:03.007 [2024-12-16 11:43:03.595826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.595862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.603434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef8a50 00:21:03.007 [2024-12-16 11:43:03.604205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.604242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.614627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef8e88 00:21:03.007 [2024-12-16 11:43:03.615850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.615888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.627890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efc128 00:21:03.007 [2024-12-16 11:43:03.629780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.629830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.635928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef9b30 00:21:03.007 [2024-12-16 11:43:03.636685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2329 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.636749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.649767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee3060 00:21:03.007 [2024-12-16 11:43:03.651499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.651532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.657888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee6300 00:21:03.007 [2024-12-16 11:43:03.658838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.658888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:03.007 21836.00 IOPS, 85.30 MiB/s [2024-12-16T11:43:03.760Z] [2024-12-16 11:43:03.672931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efc128 00:21:03.007 [2024-12-16 11:43:03.674481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.674548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.683438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016eec408 00:21:03.007 [2024-12-16 11:43:03.684754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.684803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.694896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef3e60 00:21:03.007 [2024-12-16 11:43:03.696244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.696275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.709116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016efd640 00:21:03.007 [2024-12-16 11:43:03.711103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.007 [2024-12-16 11:43:03.711177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:03.007 [2024-12-16 11:43:03.717534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ee4578 00:21:03.007 [2024-12-16 11:43:03.718573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:03.007 [2024-12-16 11:43:03.718622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:21:03.007 [2024-12-16 11:43:03.731202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97ce60) with pdu=0x200016ef2d80
00:21:03.007 [2024-12-16 11:43:03.732830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:03.007 [2024-12-16 11:43:03.732882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0
[... the same three-line pattern repeats for each injected digest error from 11:43:03.741 through 11:43:04.658: a tcp.c:2241:data_crc32_calc_done *ERROR* (Data digest error on tqpair=(0x97ce60), with varying pdu offsets), the offending WRITE command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with only the cid, lba, and sqhd fields changing ...]
00:21:04.047 22077.00 IOPS, 86.24 MiB/s
00:21:04.047 Latency(us)
00:21:04.047 [2024-12-16T11:43:04.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:04.047 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:04.047 nvme0n1 : 2.01 22048.89 86.13 0.00 0.00 5796.75 2383.13 15490.33
00:21:04.047 [2024-12-16T11:43:04.800Z] ===================================================================================================================
00:21:04.047 [2024-12-16T11:43:04.800Z] Total : 22048.89 86.13 0.00 0.00 5796.75 2383.13 15490.33
00:21:04.047 {
00:21:04.047   "results": [
00:21:04.047     {
00:21:04.047       "job": "nvme0n1",
00:21:04.047       "core_mask": "0x2",
00:21:04.047       "workload": "randwrite",
00:21:04.047       "status": "finished",
00:21:04.047       "queue_depth": 128,
00:21:04.047       "io_size": 4096,
00:21:04.047       "runtime": 2.008355,
00:21:04.047       "iops": 22048.890758854883,
00:21:04.047       "mibps": 86.12847952677689,
00:21:04.047       "io_failed": 0,
00:21:04.047       "io_timeout": 0,
00:21:04.047       "avg_latency_us": 5796.748583417846,
00:21:04.047       "min_latency_us": 2383.1272727272726,
00:21:04.047       "max_latency_us": 15490.327272727272
00:21:04.047     }
00:21:04.047   ],
00:21:04.047   "core_count": 1
00:21:04.047 }
00:21:04.047 11:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
11:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
11:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
11:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:04.047 | .driver_specific
00:21:04.047 | .nvme_error
00:21:04.047 | .status_code
00:21:04.047 | .command_transient_transport_error'
00:21:04.306 11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 173 > 0 ))
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96427
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96427 ']'
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96427
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96427
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
killing process with pid 96427
Received shutdown signal, test time was about 2.000000 seconds
00:21:04.306
00:21:04.306 Latency(us)
[2024-12-16T11:43:05.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-16T11:43:05.059Z] ===================================================================================================================
[2024-12-16T11:43:05.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96427'
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96427
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96427
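An aside for readers following the trace: get_transient_errcount (host/digest.sh) is simply bdev_get_iostat piped through the jq filter shown above. A minimal standalone sketch, assuming the same repo path and RPC socket as this run (the function name and the jq path mirror the traced script; the pipe form and the usage line are illustrative):

    get_transient_errcount() {
        local bdev=$1
        # Query bdevperf's per-bdev I/O statistics over its JSON-RPC socket
        # and extract the NVMe "transient transport error" counter.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }

    # The digest-error test passes when at least one such error was counted,
    # i.e. (( $(get_transient_errcount nvme0n1) > 0 )); in this run the count was 173.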
00:21:04.563 11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96504
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96504 /var/tmp/bperf.sock
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96504 ']'
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-12-16 11:43:05.314676] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
00:21:04.564 [2024-12-16 11:43:05.314855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96504 ] 00:21:04.822 [2024-12-16 11:43:05.470965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.822 [2024-12-16 11:43:05.527000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.081 11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.081 11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:05.081 11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:05.081 11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:05.340 11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:05.340 11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.340 11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:05.340 11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.340 11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:05.340 11:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:05.599 nvme0n1 00:21:05.859 11:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:05.859 11:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.859 11:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:05.859 11:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.859 11:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:05.859 11:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:05.859 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:05.859 Zero copy mechanism will not be used. 00:21:05.859 Running I/O for 2 seconds... 
00:21:05.859 [2024-12-16 11:43:06.491640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8
00:21:05.859 [2024-12-16 11:43:06.491817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:05.859 [2024-12-16 11:43:06.491851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same triplet (data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8, the WRITE command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 5 ms from 11:43:06.497 through 11:43:06.693, always qid:1 cid:0 len:32, with only the lba and sqhd fields varying ...]
00:21:06.174 [2024-12-16 11:43:06.698640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8
00:21:06.174 [2024-12-16 11:43:06.698754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:06.174 [2024-12-16
11:43:06.698778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.703542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.703673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.703710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.708438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.708584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.708607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.713393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.713508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.713532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.718420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.718560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.718582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.723376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.723469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.723493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.728122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.728269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.728292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.733104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.733231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:06.174 [2024-12-16 11:43:06.733254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.737984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.738115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.738138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.742875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.743005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.743028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.747829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.747905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.747929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.752858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.752970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.752993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.757736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.757844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.757866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.762636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.762750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.762773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.767563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.767655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.767678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.772445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.772573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.772596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.777422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.777539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.777561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.782299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.782432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.782456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.787274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.787404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.787428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.792285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.792391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.792413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.797049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.797159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.797196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.801973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.802084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.802107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.806849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.806961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.806984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.811701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.811801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.811824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.816617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.816723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.816746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.821476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.821610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.821633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.826437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.826576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.826599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.174 [2024-12-16 11:43:06.831338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.174 [2024-12-16 11:43:06.831448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.174 [2024-12-16 11:43:06.831470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.175 [2024-12-16 11:43:06.836245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.175 [2024-12-16 11:43:06.836359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.175 [2024-12-16 11:43:06.836381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.175 [2024-12-16 11:43:06.841127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.175 [2024-12-16 11:43:06.841253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.175 [2024-12-16 11:43:06.841275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.175 [2024-12-16 11:43:06.845998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.175 [2024-12-16 11:43:06.846112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.175 [2024-12-16 11:43:06.846134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.175 [2024-12-16 11:43:06.850813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.175 [2024-12-16 11:43:06.850926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.175 [2024-12-16 11:43:06.850949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.175 [2024-12-16 11:43:06.855765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.175 [2024-12-16 11:43:06.855864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.175 [2024-12-16 11:43:06.855887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.175 [2024-12-16 11:43:06.860688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.175 [2024-12-16 11:43:06.860788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.175 [2024-12-16 11:43:06.860811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.175 [2024-12-16 11:43:06.865688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.175 [2024-12-16 11:43:06.865796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.175 [2024-12-16 11:43:06.865819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.175 [2024-12-16 11:43:06.870563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.175 [2024-12-16 11:43:06.870695] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.175 [2024-12-16 11:43:06.870717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.175 [2024-12-16 11:43:06.875405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.175 [2024-12-16 11:43:06.875519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.175 [2024-12-16 11:43:06.875542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.175 [2024-12-16 11:43:06.880369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.175 [2024-12-16 11:43:06.880477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.175 [2024-12-16 11:43:06.880500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.175 [2024-12-16 11:43:06.885125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.175 [2024-12-16 11:43:06.885263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.175 [2024-12-16 11:43:06.885285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.175 [2024-12-16 11:43:06.890247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.175 [2024-12-16 11:43:06.890348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.175 [2024-12-16 11:43:06.890377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.457 [2024-12-16 11:43:06.895376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.457 [2024-12-16 11:43:06.895457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.457 [2024-12-16 11:43:06.895480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.457 [2024-12-16 11:43:06.900344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.457 [2024-12-16 11:43:06.900443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.457 [2024-12-16 11:43:06.900466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.457 [2024-12-16 11:43:06.905441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.457 [2024-12-16 11:43:06.905542] nvme_qpair.c: 
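For context on the triplets above: in NVMe/TCP the optional DDGST trailer is a CRC32C computed over a PDU's data section, and tcp.c:2241 (data_crc32_calc_done) reports an error when the recomputed digest does not match the received one (this autotest run appears to be exercising digest-error handling deliberately, given every injected WRITE fails the check). A minimal sketch of that verification, assuming a bitwise software CRC32C; the helper names below are illustrative, not SPDK's API:

    # ddgst_check.py -- illustrative sketch, not SPDK code.
    def crc32c(data: bytes) -> int:
        # Bitwise CRC32C (Castagnoli): reflected, reversed poly 0x82F63B78,
        # init and final XOR 0xFFFFFFFF -- the digest NVMe/TCP uses for
        # HDGST/DDGST.
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def ddgst_ok(pdu_data: bytes, received_ddgst: int) -> bool:
        # The receiver recomputes the digest over the data section and
        # compares it with the DDGST trailer; a mismatch is what the
        # "Data digest error" lines above report.
        return crc32c(pdu_data) == received_ddgst

    assert crc32c(b"123456789") == 0xE3069283  # standard CRC32C check value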
[... the run continues with the same triplet pattern (lba varying, sqhd cycling) through 11:43:07.241 ...]
00:21:06.720 [2024-12-16 11:43:07.246748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8
00:21:06.720 [2024-12-16 11:43:07.246872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:06.720 [2024-12-16 11:43:07.246895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:21:06.720 [2024-12-16 11:43:07.251618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8
00:21:06.720 [2024-12-16 11:43:07.251735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:06.720 [2024-12-16 11:43:07.251758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.256609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.256731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.256754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.261510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.261613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.261636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.266548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.266677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.266700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.271440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.271571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.271594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.276499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.276624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.276646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.281378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.281507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.281529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.286381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.286486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.286509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.291298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.291445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.291468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.296255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.296357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.296379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.301237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.301363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.301386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.306220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.306327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.306349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.311066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.311192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.311216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.315907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.315984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.316007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.320811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.320922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.320945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.325812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.325920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.325943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.330795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.330898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.330921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.335664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.335797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.335820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.340535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.340664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.340686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.345460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.345565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.345589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.350347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.350478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.350500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.355218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.355341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.355369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.360226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.360341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.360364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.365198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.365288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.365312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.370006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.370112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.370134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.374849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.374971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.720 [2024-12-16 11:43:07.374993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.720 [2024-12-16 11:43:07.379869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.720 [2024-12-16 11:43:07.379950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.379973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.384732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.384845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.384867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.389778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.389892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 
11:43:07.389916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.394680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.394793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.394815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.399652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.399777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.399801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.404588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.404695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.404719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.409464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.409593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.409615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.414410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.414514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.414537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.419239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.419376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.419405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.424139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.424284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:06.721 [2024-12-16 11:43:07.424306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.428984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.429097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.429119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.433927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.434042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.434063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.438805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.438916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.438937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.443821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.443897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.443920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.448963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.449073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.449095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.454002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.454104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.454127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.458788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.458900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.458921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.463906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.463989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.464012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.721 [2024-12-16 11:43:07.468759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.721 [2024-12-16 11:43:07.468869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.721 [2024-12-16 11:43:07.468891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.981 [2024-12-16 11:43:07.473715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.981 [2024-12-16 11:43:07.473825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.981 [2024-12-16 11:43:07.473848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.981 [2024-12-16 11:43:07.478573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.981 [2024-12-16 11:43:07.478684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.981 [2024-12-16 11:43:07.478706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.981 [2024-12-16 11:43:07.483329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.981 [2024-12-16 11:43:07.483450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.981 [2024-12-16 11:43:07.483475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.981 [2024-12-16 11:43:07.488234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.981 [2024-12-16 11:43:07.488349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.981 [2024-12-16 11:43:07.488372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.981 6203.00 IOPS, 775.38 MiB/s [2024-12-16T11:43:07.734Z] [2024-12-16 11:43:07.494115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.981 [2024-12-16 11:43:07.494237] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.981 [2024-12-16 11:43:07.494261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.981 [2024-12-16 11:43:07.499033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.981 [2024-12-16 11:43:07.499144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.981 [2024-12-16 11:43:07.499166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.503812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.503925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.503947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.508697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.508795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.508817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.513530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.513639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.513660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.518430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.518531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.518554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.523161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.523288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.523310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.528007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.528125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.528146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.533014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.533139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.533163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.538204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.538335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.538358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.543397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.543514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.543537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.548486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.548565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.548588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.553701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.553831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.553854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.558846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.558957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.558979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.563999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 
11:43:07.564130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.564153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.569120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.569310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.569334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.574141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.574253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.574275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.579048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.579152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.579188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.583844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.583945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.583968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.588848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.588958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.588981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.593770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.593882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.593905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.598706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 
00:21:06.982 [2024-12-16 11:43:07.598819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.598841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.603608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.603738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.603761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.608580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.608692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.608713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.613474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.613585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.613606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.618359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.618458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.618479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.623257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.623410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.623433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.628173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.628277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.628298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.633077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with 
pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.633227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.982 [2024-12-16 11:43:07.633249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.982 [2024-12-16 11:43:07.637985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.982 [2024-12-16 11:43:07.638107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.638130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.642815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.642924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.642946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.647765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.647863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.647886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.652724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.652854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.652877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.658039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.658158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.658183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.663199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.663343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.663388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.668433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.668549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.668573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.673537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.673629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.673652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.678611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.678694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.678717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.683792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.683878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.683902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.688972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.689085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.689107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.694153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.694276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.694300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.699293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.699388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.699418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.704352] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.704477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.704501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.709404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.709534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.709556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.714420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.714535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.714557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.719391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.719531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.719554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.724416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.724525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.724548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:06.983 [2024-12-16 11:43:07.729338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:06.983 [2024-12-16 11:43:07.729428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.983 [2024-12-16 11:43:07.729451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:07.243 [2024-12-16 11:43:07.734349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.243 [2024-12-16 11:43:07.734472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.243 [2024-12-16 11:43:07.734496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:07.243 [2024-12-16 11:43:07.739390] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.243 [2024-12-16 11:43:07.739518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.243 [2024-12-16 11:43:07.739541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:07.243 [2024-12-16 11:43:07.744413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.243 [2024-12-16 11:43:07.744518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.243 [2024-12-16 11:43:07.744542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:07.243 [2024-12-16 11:43:07.749269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.243 [2024-12-16 11:43:07.749394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.243 [2024-12-16 11:43:07.749417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:07.243 [2024-12-16 11:43:07.754283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.243 [2024-12-16 11:43:07.754419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.243 [2024-12-16 11:43:07.754441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:07.243 [2024-12-16 11:43:07.759408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.243 [2024-12-16 11:43:07.759505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.243 [2024-12-16 11:43:07.759528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:07.243 [2024-12-16 11:43:07.764425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.243 [2024-12-16 11:43:07.764533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.243 [2024-12-16 11:43:07.764557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:07.243 [2024-12-16 11:43:07.769542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.243 [2024-12-16 11:43:07.769653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.243 [2024-12-16 11:43:07.769677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:07.243 
00:21:07.244 [2024-12-16 11:43:07.774610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8
00:21:07.244 [2024-12-16 11:43:07.774716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.244 [2024-12-16 11:43:07.774738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:21:07.244 [2024-12-16 11:43:07.779472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8
00:21:07.244 [2024-12-16 11:43:07.779619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.244 [2024-12-16 11:43:07.779642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:21:07.244 [2024-12-16 11:43:07.784518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8
00:21:07.244 [2024-12-16 11:43:07.784664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.244 [2024-12-16 11:43:07.784686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2241 data_crc32_calc_done *ERROR* on tqpair=(0x97d1a0) with pdu=0x200016eff3c8, WRITE sqid:1 cid:0 nsid:1 command print with varying lba and len:32, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0002/0022/0042/0062) repeats for every subsequent injected WRITE from 11:43:07.789 through 11:43:08.452 ...]
00:21:07.769 [2024-12-16 11:43:08.457742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8
00:21:07.769 [2024-12-16 11:43:08.457838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.769 [2024-12-16 11:43:08.457862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:21:07.769 [2024-12-16 11:43:08.462778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8
00:21:07.769 [2024-12-16 11:43:08.462878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.769 [2024-12-16 11:43:08.462900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0
dnr:0 00:21:07.769 [2024-12-16 11:43:08.467757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.769 [2024-12-16 11:43:08.467839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-12-16 11:43:08.467863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:07.769 [2024-12-16 11:43:08.472698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.769 [2024-12-16 11:43:08.472797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-12-16 11:43:08.472821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:07.769 [2024-12-16 11:43:08.477622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.769 [2024-12-16 11:43:08.477722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-12-16 11:43:08.477745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:07.769 [2024-12-16 11:43:08.482671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.769 [2024-12-16 11:43:08.482771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-12-16 11:43:08.482795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:07.769 [2024-12-16 11:43:08.487661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x97d1a0) with pdu=0x200016eff3c8 00:21:07.769 [2024-12-16 11:43:08.487779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-12-16 11:43:08.487803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:07.769 6204.00 IOPS, 775.50 MiB/s 00:21:07.769 Latency(us) 00:21:07.769 [2024-12-16T11:43:08.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.769 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:07.769 nvme0n1 : 2.00 6202.61 775.33 0.00 0.00 2573.52 1697.98 6017.40 00:21:07.769 [2024-12-16T11:43:08.522Z] =================================================================================================================== 00:21:07.769 [2024-12-16T11:43:08.522Z] Total : 6202.61 775.33 0.00 0.00 2573.52 1697.98 6017.40 00:21:07.769 { 00:21:07.769 "results": [ 00:21:07.769 { 00:21:07.769 "job": "nvme0n1", 00:21:07.769 "core_mask": "0x2", 00:21:07.769 "workload": "randwrite", 00:21:07.769 "status": "finished", 00:21:07.769 "queue_depth": 16, 00:21:07.769 "io_size": 131072, 00:21:07.769 "runtime": 2.003995, 00:21:07.769 "iops": 6202.610285953807, 00:21:07.769 "mibps": 775.3262857442259, 00:21:07.769 "io_failed": 0, 
00:21:07.769 "io_timeout": 0, 00:21:07.769 "avg_latency_us": 2573.5174290938344, 00:21:07.769 "min_latency_us": 1697.9781818181818, 00:21:07.769 "max_latency_us": 6017.396363636363 00:21:07.769 } 00:21:07.769 ], 00:21:07.769 "core_count": 1 00:21:07.769 } 00:21:07.769 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:07.769 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:07.769 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:07.769 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:07.769 | .driver_specific 00:21:07.769 | .nvme_error 00:21:07.769 | .status_code 00:21:07.769 | .command_transient_transport_error' 00:21:08.335 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 401 > 0 )) 00:21:08.335 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96504 00:21:08.335 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96504 ']' 00:21:08.335 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96504 00:21:08.336 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:08.336 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.336 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96504 00:21:08.336 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:08.336 killing process with pid 96504 00:21:08.336 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:08.336 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96504' 00:21:08.336 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96504 00:21:08.336 Received shutdown signal, test time was about 2.000000 seconds 00:21:08.336 00:21:08.336 Latency(us) 00:21:08.336 [2024-12-16T11:43:09.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.336 [2024-12-16T11:43:09.089Z] =================================================================================================================== 00:21:08.336 [2024-12-16T11:43:09.089Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.336 11:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96504 00:21:08.336 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 96243 00:21:08.336 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96243 ']' 00:21:08.336 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96243 00:21:08.336 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:08.336 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.336 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96243 00:21:08.336 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:08.336 killing process with pid 96243 00:21:08.336 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:08.336 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96243' 00:21:08.336 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96243 00:21:08.336 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96243 00:21:08.594 00:21:08.594 real 0m15.755s 00:21:08.594 user 0m30.636s 00:21:08.594 sys 0m4.326s 00:21:08.594 ************************************ 00:21:08.594 END TEST nvmf_digest_error 00:21:08.594 ************************************ 00:21:08.594 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.595 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:08.595 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:08.595 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:08.595 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:08.595 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.853 rmmod nvme_tcp 00:21:08.853 rmmod nvme_fabrics 00:21:08.853 rmmod nvme_keyring 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 96243 ']' 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 96243 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 96243 ']' 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 96243 00:21:08.853 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (96243) - No such process 00:21:08.853 Process with pid 96243 is not found 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 96243 is not found' 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:08.853 11:43:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:08.853 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:09.111 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:09.111 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:09.111 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:09.111 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:09.111 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.111 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.111 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.111 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:09.111 00:21:09.111 real 0m33.438s 00:21:09.111 user 1m2.848s 00:21:09.111 sys 0m9.431s 00:21:09.111 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.112 11:43:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:09.112 ************************************ 00:21:09.112 END TEST nvmf_digest 00:21:09.112 ************************************ 00:21:09.112 11:43:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:21:09.112 11:43:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:21:09.112 11:43:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:21:09.112 11:43:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:09.112 11:43:09 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.112 11:43:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.112 ************************************ 00:21:09.112 START TEST nvmf_mdns_discovery 00:21:09.112 ************************************ 00:21:09.112 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:21:09.112 * Looking for test storage... 00:21:09.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:09.112 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:09.112 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:09.112 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:09.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.372 --rc genhtml_branch_coverage=1 00:21:09.372 --rc genhtml_function_coverage=1 00:21:09.372 --rc genhtml_legend=1 00:21:09.372 --rc geninfo_all_blocks=1 00:21:09.372 --rc geninfo_unexecuted_blocks=1 00:21:09.372 00:21:09.372 ' 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:09.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.372 --rc genhtml_branch_coverage=1 00:21:09.372 --rc genhtml_function_coverage=1 00:21:09.372 --rc genhtml_legend=1 00:21:09.372 --rc geninfo_all_blocks=1 00:21:09.372 --rc geninfo_unexecuted_blocks=1 00:21:09.372 00:21:09.372 ' 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:09.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.372 --rc genhtml_branch_coverage=1 00:21:09.372 --rc genhtml_function_coverage=1 00:21:09.372 --rc genhtml_legend=1 00:21:09.372 --rc geninfo_all_blocks=1 00:21:09.372 --rc geninfo_unexecuted_blocks=1 00:21:09.372 00:21:09.372 ' 00:21:09.372 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:09.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.372 --rc genhtml_branch_coverage=1 00:21:09.372 --rc genhtml_function_coverage=1 00:21:09.372 --rc genhtml_legend=1 00:21:09.372 --rc geninfo_all_blocks=1 00:21:09.372 --rc geninfo_unexecuted_blocks=1 00:21:09.372 00:21:09.373 ' 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:09.373 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:09.373 Cannot find device "nvmf_init_br" 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:21:09.373 11:43:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:09.373 Cannot find device "nvmf_init_br2" 00:21:09.373 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:21:09.373 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:09.373 Cannot find device "nvmf_tgt_br" 00:21:09.373 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:21:09.373 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:09.373 Cannot find device "nvmf_tgt_br2" 00:21:09.373 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:21:09.373 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:09.373 Cannot find device "nvmf_init_br" 00:21:09.373 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:21:09.373 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:09.373 Cannot find device "nvmf_init_br2" 00:21:09.373 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:21:09.373 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:09.373 Cannot find device "nvmf_tgt_br" 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:09.374 Cannot find device "nvmf_tgt_br2" 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:09.374 Cannot find device "nvmf_br" 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:09.374 Cannot find device "nvmf_init_if" 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:09.374 Cannot find device "nvmf_init_if2" 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:21:09.374 11:43:10 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:09.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:09.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:21:09.374 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
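A note on what nvmf_veth_init is doing here: the cleanup failures above ("Cannot find device ...", "Cannot open network namespace ...") are expected on a fresh runner; the fixture then builds two veth pairs bridged between the root namespace (initiator side, 10.0.0.1 and 10.0.0.2) and nvmf_tgt_ns_spdk (target side, 10.0.0.3 and 10.0.0.4). A minimal single-pair reconstruction follows, assuming iproute2 and the names from this trace; it is a sketch, not the full helper in nvmf/common.sh, and the second pair (nvmf_init_if2/nvmf_tgt_if2) is set up identically. The bridge-attach, iptables, and ping steps appear just below in the trace.

    # create the target namespace and one veth pair per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair, stays in root ns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # only the _if end moves into the ns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # as in the trace below: enslave both *_br ends and open TCP/4420
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # connectivity check, matching the pings below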
00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:09.633 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:09.633 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:21:09.633 00:21:09.633 --- 10.0.0.3 ping statistics --- 00:21:09.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.633 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:09.633 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:09.633 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:21:09.633 00:21:09.633 --- 10.0.0.4 ping statistics --- 00:21:09.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.633 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:09.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:09.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:21:09.633 00:21:09.633 --- 10.0.0.1 ping statistics --- 00:21:09.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.633 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:09.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:09.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:21:09.633 00:21:09.633 --- 10.0.0.2 ping statistics --- 00:21:09.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.633 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:09.633 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=96839 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 96839 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 96839 ']' 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.892 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.892 [2024-12-16 11:43:10.472349] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
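The nvmfappstart/waitforlisten pair above boils down to launching the target inside the namespace and polling its RPC socket until it answers. A rough stand-in, assuming the default socket /var/tmp/spdk.sock and the repo's rpc.py (the real waitforlisten helper in autotest_common.sh is more careful about timeouts and stale sockets):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!
    # rpc_get_methods is a cheap query; it succeeds as soon as the RPC server is up
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done
    # --wait-for-rpc leaves the framework paused so that config RPCs, such as the
    # nvmf_set_config --discovery-filter=address call seen below, can run first;
    # framework_start_init then completes subsystem initialization
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init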
00:21:09.892 [2024-12-16 11:43:10.472466] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.892 [2024-12-16 11:43:10.620503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.151 [2024-12-16 11:43:10.685093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.151 [2024-12-16 11:43:10.685182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.151 [2024-12-16 11:43:10.685211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.151 [2024-12-16 11:43:10.685219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.151 [2024-12-16 11:43:10.685227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.151 [2024-12-16 11:43:10.685630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.151 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.409 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.409 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:10.409 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.409 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.409 [2024-12-16 11:43:10.914007] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.409 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.409 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:21:10.409 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.409 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.409 [2024-12-16 11:43:10.922192] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:21:10.409 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.410 null0 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.410 null1 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.410 null2 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.410 null3 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=96875 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 96875 /tmp/host.sock 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 96875 ']' 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local 
rpc_addr=/tmp/host.sock 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:10.410 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.410 11:43:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.410 [2024-12-16 11:43:11.028873] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:21:10.410 [2024-12-16 11:43:11.028987] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96875 ] 00:21:10.668 [2024-12-16 11:43:11.175794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.668 [2024-12-16 11:43:11.236690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.668 11:43:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.668 11:43:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:21:10.668 11:43:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:21:10.668 11:43:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:21:10.668 11:43:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:21:10.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=96892 00:21:10.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:21:10.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:21:10.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:21:10.927 Process 1059 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:21:10.927 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:21:10.927 Successfully dropped root privileges. 00:21:10.927 avahi-daemon 0.8 starting up. 00:21:10.927 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:21:10.927 Successfully called chroot(). 00:21:10.927 Successfully dropped remaining capabilities. 00:21:10.927 No service file found in /etc/avahi/services. 00:21:11.863 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:21:11.863 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:21:11.863 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:21:11.863 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:21:11.863 Network interface enumeration completed. 00:21:11.863 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
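For reference, the avahi-daemon startup captured above (mdns_discovery.sh@56-58) can be reproduced by hand. A minimal sketch, assuming the netns and interface names from this run (nvmf_tgt_ns_spdk, nvmf_tgt_if, nvmf_tgt_if2); the script feeds the config through process substitution, which is why the trace shows /dev/fd/63:

    # Kill any system-wide daemon, then run a private one inside the target netns,
    # restricted to the two test interfaces and to IPv4 (the config echoed above).
    avahi-daemon --kill || true
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
        '[server]' \
        'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
        'use-ipv4=yes' \
        'use-ipv6=no') &
    avahipid=$!
    sleep 1   # let the daemon join the mDNS multicast groups before browsing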
00:21:11.863 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:21:11.863 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:21:11.863 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:21:11.863 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2403469740. 00:21:11.863 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:11.863 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.863 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.863 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.863 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:11.863 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.863 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.863 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.863 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:21:11.863 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:21:11.864 11:43:12 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.864 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
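The get_subsystem_names and get_bdev_list checks repeated throughout this run each reduce to one pipeline. A sketch of their shape, assuming rpc_cmd wraps scripts/rpc.py against the host app's private socket (the exact wrapper lives in the test harness, not shown here):

    # Flatten controller / bdev names into one sorted line for the [[ ... == ... ]] checks.
    get_subsystem_names() {
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

Both start out empty ('' == ''), and later in the run they yield strings such as 'mdns0_nvme0 mdns1_nvme0' and 'mdns0_nvme0n1 mdns1_nvme0n1', which is exactly what the comparisons below assert.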
00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.123 [2024-12-16 11:43:12.790910] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.123 [2024-12-16 11:43:12.862657] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.123 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.382 11:43:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:21:12.950 [2024-12-16 11:43:13.690955] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:21:13.590 [2024-12-16 11:43:14.090916] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:13.590 [2024-12-16 11:43:14.090970] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:13.590 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:13.590 cookie is 0 00:21:13.590 is_local: 1 00:21:13.590 our_own: 0 00:21:13.590 wide_area: 0 00:21:13.590 multicast: 1 00:21:13.590 cached: 1 00:21:13.591 [2024-12-16 11:43:14.190914] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:13.591 [2024-12-16 11:43:14.190963] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:21:13.591 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:13.591 cookie is 0 00:21:13.591 is_local: 1 00:21:13.591 our_own: 0 00:21:13.591 wide_area: 0 00:21:13.591 multicast: 1 00:21:13.591 cached: 1 00:21:14.547 [2024-12-16 11:43:15.092384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.547 [2024-12-16 11:43:15.092483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140a850 with addr=10.0.0.4, port=8009 00:21:14.547 [2024-12-16 11:43:15.092526] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:14.547 [2024-12-16 11:43:15.092549] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:14.547 [2024-12-16 11:43:15.092560] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:21:14.547 [2024-12-16 11:43:15.199076] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:14.547 [2024-12-16 11:43:15.199105] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:14.547 [2024-12-16 11:43:15.199132] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:14.547 [2024-12-16 11:43:15.287206] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:21:14.805 [2024-12-16 11:43:15.347553] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:21:14.805 [2024-12-16 11:43:15.348525] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x143fa10:1 started. 00:21:14.805 [2024-12-16 11:43:15.350526] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:21:14.805 [2024-12-16 11:43:15.350563] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:14.805 [2024-12-16 11:43:15.357386] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x143fa10 was disconnected and freed. delete nvme_qpair. 00:21:15.372 [2024-12-16 11:43:16.092236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:15.372 [2024-12-16 11:43:16.092313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143f810 with addr=10.0.0.4, port=8009 00:21:15.372 [2024-12-16 11:43:16.092340] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:15.372 [2024-12-16 11:43:16.092352] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:15.372 [2024-12-16 11:43:16.092362] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:21:16.750 [2024-12-16 11:43:17.092175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.750 [2024-12-16 11:43:17.092283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1428560 with addr=10.0.0.4, port=8009 00:21:16.750 [2024-12-16 11:43:17.092309] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:16.750 [2024-12-16 11:43:17.092321] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:16.750 [2024-12-16 11:43:17.092330] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:17.318 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:17.318 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:17.318 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 [2024-12-16 11:43:17.948777] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:21:17.318 [2024-12-16 11:43:17.951346] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:17.318 [2024-12-16 11:43:17.951386] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 [2024-12-16 11:43:17.956659] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:21:17.318 [2024-12-16 11:43:17.957315] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:17.318 11:43:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.318 11:43:17 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:21:17.577 [2024-12-16 11:43:18.088421] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:17.577 [2024-12-16 11:43:18.088465] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:17.577 [2024-12-16 11:43:18.099025] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:21:17.577 [2024-12-16 11:43:18.099050] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:21:17.577 [2024-12-16 11:43:18.099081] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:17.577 [2024-12-16 11:43:18.175173] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:17.577 [2024-12-16 11:43:18.185144] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:21:17.577 [2024-12-16 11:43:18.239462] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:21:17.577 [2024-12-16 11:43:18.240089] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x143cce0:1 started. 00:21:17.577 [2024-12-16 11:43:18.241823] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:21:17.577 [2024-12-16 11:43:18.241846] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:21:17.577 [2024-12-16 11:43:18.247731] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x143cce0 was disconnected and freed. delete nvme_qpair. 
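Condensed, the second target's bring-up that the host just attached to is this RPC sequence (each command appears verbatim in the trace above, though interleaved with host-side checks; 10.0.0.4 is the second test interface):

    # Expose a discovery service and a data listener on 10.0.0.4, then publish
    # the mDNS pull registration record so the host's mdns browser can find spdk1.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
    rpc.py nvmf_publish_mdns_prr

The repeated errno 111 connect failures above are expected: the host learned of 10.0.0.4:8009 from the mDNS record before the listener existed and kept retrying until the nvmf_subsystem_add_listener at @154 landed.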
00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:21:18.514 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:18.514 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:21:18.514 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:18.514 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:18.514 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:18.514 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:18.514 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:18.514 11:43:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
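The spdk1-found check that just passed is driven by a small matcher over avahi-browse's parseable output. A condensed sketch of its shape (not the verbatim helper; argument handling simplified):

    # Succeed when a browse line carrying $process, $ip and $port matches the
    # expectation in $check_type ("found" / "not found"); fail otherwise.
    check_mdns_request_exists() {
        local process=$1 ip=$2 port=$3 check_type=$4 line lines
        readarray -t lines < <(avahi-browse -t -r _nvme-disc._tcp -p)
        for line in "${lines[@]}"; do
            if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
                [[ $check_type == found ]]   # success only if presence was expected
                return
            fi
        done
        [[ $check_type == 'not found' ]]     # success only if absence was expected
        return
    }

The '=;' lines are the resolved records, so a single line can be tested for service name, address and port at once, as the per-line [[ ]] checks in the trace do.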
00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:18.514 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.515 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.774 [2024-12-16 11:43:19.379958] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1444f60:1 started. 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.774 [2024-12-16 11:43:19.388004] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1444f60 was disconnected and freed. delete nvme_qpair. 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.774 11:43:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:21:18.774 [2024-12-16 11:43:19.390927] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:18.774 [2024-12-16 11:43:19.390947] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:21:18.774 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:18.774 cookie is 0 00:21:18.774 is_local: 1 00:21:18.774 our_own: 0 00:21:18.774 wide_area: 0 00:21:18.774 multicast: 1 00:21:18.774 cached: 1 00:21:18.774 [2024-12-16 11:43:19.390968] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:21:18.774 [2024-12-16 11:43:19.394004] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x143ec30:1 started. 00:21:18.774 [2024-12-16 11:43:19.397806] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x143ec30 was disconnected and freed. delete nvme_qpair. 00:21:18.774 [2024-12-16 11:43:19.490916] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:18.774 [2024-12-16 11:43:19.490948] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:18.774 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:18.774 cookie is 0 00:21:18.774 is_local: 1 00:21:18.774 our_own: 0 00:21:18.774 wide_area: 0 00:21:18.774 multicast: 1 00:21:18.774 cached: 1 00:21:18.774 [2024-12-16 11:43:19.490960] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:21:19.709 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:21:19.709 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:19.709 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:19.709 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:19.709 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.709 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.709 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:19.709 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.709 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:19.709 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:21:19.968 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:19.968 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.968 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.968 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:21:19.968 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.968 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:21:19.968 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:21:19.968 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:21:19.968 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:21:19.969 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.969 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.969 [2024-12-16 11:43:20.526489] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:19.969 [2024-12-16 11:43:20.527406] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:19.969 [2024-12-16 11:43:20.527443] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:19.969 [2024-12-16 11:43:20.527482] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:19.969 [2024-12-16 11:43:20.527497] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:19.969 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.969 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:21:19.969 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.969 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.969 [2024-12-16 11:43:20.534600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:21:19.969 [2024-12-16 11:43:20.535394] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:19.969 [2024-12-16 11:43:20.535451] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:19.969 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.969 11:43:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:21:19.969 [2024-12-16 11:43:20.666497] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:21:19.969 [2024-12-16 11:43:20.666909] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:21:20.227 [2024-12-16 11:43:20.724935] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:21:20.227 [2024-12-16 11:43:20.725010] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:21:20.227 [2024-12-16 11:43:20.725030] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:20.227 [2024-12-16 11:43:20.725036] 
bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:20.227 [2024-12-16 11:43:20.725053] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:20.227 [2024-12-16 11:43:20.725202] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:21:20.227 [2024-12-16 11:43:20.725248] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:21:20.228 [2024-12-16 11:43:20.725256] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:21:20.228 [2024-12-16 11:43:20.725261] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:20.228 [2024-12-16 11:43:20.725275] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:20.228 [2024-12-16 11:43:20.770583] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:20.228 [2024-12-16 11:43:20.770605] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:20.228 [2024-12-16 11:43:20.770660] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:21:20.228 [2024-12-16 11:43:20.770669] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:20.795 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:21:20.795 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:20.795 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:20.795 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.795 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.795 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:20.795 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:21.054 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.055 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.320 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:21:21.320 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:21:21.320 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:21:21.320 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:21.321 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.321 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.321 [2024-12-16 11:43:21.823202] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:21.321 [2024-12-16 11:43:21.823240] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:21.321 [2024-12-16 11:43:21.823277] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:21.321 [2024-12-16 11:43:21.823293] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:21.321 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.321 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:21:21.321 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.321 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.321 [2024-12-16 11:43:21.828824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.321 [2024-12-16 11:43:21.828861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.321 [2024-12-16 11:43:21.828874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.321 [2024-12-16 11:43:21.828884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.321 [2024-12-16 11:43:21.828894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.321 [2024-12-16 11:43:21.828903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.321 [2024-12-16 11:43:21.828914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.321 [2024-12-16 11:43:21.828923] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.321 [2024-12-16 11:43:21.828932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.321 [2024-12-16 11:43:21.835198] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:21.321 [2024-12-16 11:43:21.835256] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:21.321 [2024-12-16 11:43:21.838784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.321 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.321 11:43:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:21:21.321 [2024-12-16 11:43:21.842838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.321 [2024-12-16 11:43:21.842873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.321 [2024-12-16 11:43:21.842886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.321 [2024-12-16 11:43:21.842895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.321 [2024-12-16 11:43:21.842904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.321 [2024-12-16 11:43:21.842914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.321 [2024-12-16 11:43:21.842924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.321 [2024-12-16 11:43:21.842932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.321 [2024-12-16 11:43:21.842941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1429dc0 is same with the state(6) to be set 00:21:21.321 [2024-12-16 11:43:21.848802] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:21.321 [2024-12-16 11:43:21.848826] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:21.321 [2024-12-16 11:43:21.848833] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:21.321 [2024-12-16 11:43:21.848839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:21.321 [2024-12-16 11:43:21.848872] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
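
The step that triggered the churn above is the pair of listener removals at mdns_discovery.sh@195-196: the test drops the 4420 listeners that the mDNS-discovered controllers are currently connected to, which is why the target aborts the admin queue (the ASYNC EVENT REQUEST / SQ DELETION completions) and the host starts cycling through reconnects. A minimal sketch of the same step driven by hand, with NQNs, addresses, and ports taken verbatim from the trace; rpc_cmd is this suite's wrapper around scripts/rpc.py (outside the suite, plain rpc.py takes the same arguments):

    # Remove the 4420 listeners the discovered controllers are using
    # (mirrors mdns_discovery.sh@195-196; values copied from this log).
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
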
00:21:21.321 [2024-12-16 11:43:21.848954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.321 [2024-12-16 11:43:21.848975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4790 with addr=10.0.0.3, port=4420 00:21:21.321 [2024-12-16 11:43:21.848997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.321 [2024-12-16 11:43:21.849014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.321 [2024-12-16 11:43:21.849029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:21.321 [2024-12-16 11:43:21.849038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:21.321 [2024-12-16 11:43:21.849049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:21.321 [2024-12-16 11:43:21.849057] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:21.321 [2024-12-16 11:43:21.849063] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:21.321 [2024-12-16 11:43:21.849068] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:21.321 [2024-12-16 11:43:21.852806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1429dc0 (9): Bad file descriptor 00:21:21.321 [2024-12-16 11:43:21.858882] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:21.321 [2024-12-16 11:43:21.858906] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:21.321 [2024-12-16 11:43:21.858912] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:21.321 [2024-12-16 11:43:21.858917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:21.321 [2024-12-16 11:43:21.858942] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:21.321 [2024-12-16 11:43:21.858994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.321 [2024-12-16 11:43:21.859014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4790 with addr=10.0.0.3, port=4420 00:21:21.321 [2024-12-16 11:43:21.859024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.321 [2024-12-16 11:43:21.859040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.321 [2024-12-16 11:43:21.859054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:21.321 [2024-12-16 11:43:21.859062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:21.321 [2024-12-16 11:43:21.859071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:21:21.321 [2024-12-16 11:43:21.859079] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:21.321 [2024-12-16 11:43:21.859085] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:21.321 [2024-12-16 11:43:21.859090] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:21.321 [2024-12-16 11:43:21.862814] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:21.321 [2024-12-16 11:43:21.862839] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:21.321 [2024-12-16 11:43:21.862845] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:21.321 [2024-12-16 11:43:21.862850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:21.321 [2024-12-16 11:43:21.862877] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:21.321 [2024-12-16 11:43:21.862928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.321 [2024-12-16 11:43:21.862947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1429dc0 with addr=10.0.0.4, port=4420 00:21:21.321 [2024-12-16 11:43:21.862957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1429dc0 is same with the state(6) to be set 00:21:21.321 [2024-12-16 11:43:21.862972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1429dc0 (9): Bad file descriptor 00:21:21.321 [2024-12-16 11:43:21.862986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:21.321 [2024-12-16 11:43:21.862995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:21.321 [2024-12-16 11:43:21.863004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:21.321 [2024-12-16 11:43:21.863011] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:21.321 [2024-12-16 11:43:21.863017] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:21.321 [2024-12-16 11:43:21.863022] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:21.321 [2024-12-16 11:43:21.868953] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:21.321 [2024-12-16 11:43:21.868977] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:21.321 [2024-12-16 11:43:21.868983] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:21.321 [2024-12-16 11:43:21.868988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:21.321 [2024-12-16 11:43:21.869013] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
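
Each repetition of this block is one pass of bdev_nvme's reconnect state machine: qpairs are deleted, the controller is disconnected, a reconnect is attempted, connect() fails with errno 111 (ECONNREFUSED, since nothing listens on port 4420 any more), controller reinitialization fails, pending resets are cleared, and the cycle restarts. The churn only ends once the discovery log page fetched after the AER reports the 4420 path "not found", at which point bdev_nvme drops the stale path and keeps the surviving 4421 one. A minimal shell-side way to wait for that, reusing the exact RPC + jq pipeline of get_subsystem_paths (mdns_discovery.sh@73); the retry/timeout guard is an addition, not part of the script:

    # Wait until the host has dropped the removed 4420 path;
    # only 4421 should remain for the mdns0_nvme0 controller.
    for _ in $(seq 1 20); do
        paths=$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
                | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
        [[ $paths == 4421 ]] && break
        sleep 0.5
    done
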
00:21:21.321 [2024-12-16 11:43:21.869062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.321 [2024-12-16 11:43:21.869080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4790 with addr=10.0.0.3, port=4420 00:21:21.321 [2024-12-16 11:43:21.869090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.321 [2024-12-16 11:43:21.869105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.321 [2024-12-16 11:43:21.869118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:21.321 [2024-12-16 11:43:21.869126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:21.321 [2024-12-16 11:43:21.869135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:21.321 [2024-12-16 11:43:21.869159] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:21.321 [2024-12-16 11:43:21.869166] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:21.321 [2024-12-16 11:43:21.869170] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:21.321 [2024-12-16 11:43:21.872888] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:21.321 [2024-12-16 11:43:21.872912] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:21.321 [2024-12-16 11:43:21.872918] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:21.321 [2024-12-16 11:43:21.872923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:21.321 [2024-12-16 11:43:21.872947] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:21.321 [2024-12-16 11:43:21.872995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.321 [2024-12-16 11:43:21.873013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1429dc0 with addr=10.0.0.4, port=4420 00:21:21.321 [2024-12-16 11:43:21.873023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1429dc0 is same with the state(6) to be set 00:21:21.321 [2024-12-16 11:43:21.873038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1429dc0 (9): Bad file descriptor 00:21:21.321 [2024-12-16 11:43:21.873051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:21.321 [2024-12-16 11:43:21.873059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:21.321 [2024-12-16 11:43:21.873068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:21.321 [2024-12-16 11:43:21.873076] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:21.321 [2024-12-16 11:43:21.873082] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:21.321 [2024-12-16 11:43:21.873086] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:21.321 [2024-12-16 11:43:21.879024] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:21.321 [2024-12-16 11:43:21.879048] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:21.321 [2024-12-16 11:43:21.879054] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:21.321 [2024-12-16 11:43:21.879059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:21.321 [2024-12-16 11:43:21.879083] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:21.321 [2024-12-16 11:43:21.879131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.321 [2024-12-16 11:43:21.879160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4790 with addr=10.0.0.3, port=4420 00:21:21.321 [2024-12-16 11:43:21.879172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.321 [2024-12-16 11:43:21.879187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.321 [2024-12-16 11:43:21.879200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:21.321 [2024-12-16 11:43:21.879209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:21.321 [2024-12-16 11:43:21.879218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:21.321 [2024-12-16 11:43:21.879226] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:21.321 [2024-12-16 11:43:21.879232] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:21.321 [2024-12-16 11:43:21.879237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:21.321 [2024-12-16 11:43:21.882958] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:21.321 [2024-12-16 11:43:21.882993] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:21.321 [2024-12-16 11:43:21.882999] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:21.321 [2024-12-16 11:43:21.883004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:21.321 [2024-12-16 11:43:21.883031] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:21:21.321 [2024-12-16 11:43:21.883083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.322 [2024-12-16 11:43:21.883103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1429dc0 with addr=10.0.0.4, port=4420 00:21:21.322 [2024-12-16 11:43:21.883113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1429dc0 is same with the state(6) to be set 00:21:21.322 [2024-12-16 11:43:21.883128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1429dc0 (9): Bad file descriptor 00:21:21.322 [2024-12-16 11:43:21.883153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:21.322 [2024-12-16 11:43:21.883164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:21.322 [2024-12-16 11:43:21.883173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:21.322 [2024-12-16 11:43:21.883181] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:21.322 [2024-12-16 11:43:21.883187] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:21.322 [2024-12-16 11:43:21.883192] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:21.322 [2024-12-16 11:43:21.889092] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:21.322 [2024-12-16 11:43:21.889118] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:21.322 [2024-12-16 11:43:21.889141] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.889146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:21.322 [2024-12-16 11:43:21.889182] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.889234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.322 [2024-12-16 11:43:21.889253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4790 with addr=10.0.0.3, port=4420 00:21:21.322 [2024-12-16 11:43:21.889263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.322 [2024-12-16 11:43:21.889278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.322 [2024-12-16 11:43:21.889307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:21.322 [2024-12-16 11:43:21.889327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:21.322 [2024-12-16 11:43:21.889336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:21.322 [2024-12-16 11:43:21.889344] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:21:21.322 [2024-12-16 11:43:21.889350] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:21.322 [2024-12-16 11:43:21.889354] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:21.322 [2024-12-16 11:43:21.893039] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:21.322 [2024-12-16 11:43:21.893063] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:21.322 [2024-12-16 11:43:21.893086] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.893092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:21.322 [2024-12-16 11:43:21.893116] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.893176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.322 [2024-12-16 11:43:21.893195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1429dc0 with addr=10.0.0.4, port=4420 00:21:21.322 [2024-12-16 11:43:21.893205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1429dc0 is same with the state(6) to be set 00:21:21.322 [2024-12-16 11:43:21.893220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1429dc0 (9): Bad file descriptor 00:21:21.322 [2024-12-16 11:43:21.893233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:21.322 [2024-12-16 11:43:21.893242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:21.322 [2024-12-16 11:43:21.893251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:21.322 [2024-12-16 11:43:21.893259] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:21.322 [2024-12-16 11:43:21.893265] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:21.322 [2024-12-16 11:43:21.893269] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:21.322 [2024-12-16 11:43:21.899192] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:21.322 [2024-12-16 11:43:21.899214] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:21.322 [2024-12-16 11:43:21.899237] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.899243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:21.322 [2024-12-16 11:43:21.899267] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:21.322 [2024-12-16 11:43:21.899315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.322 [2024-12-16 11:43:21.899333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4790 with addr=10.0.0.3, port=4420 00:21:21.322 [2024-12-16 11:43:21.899343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.322 [2024-12-16 11:43:21.899358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.322 [2024-12-16 11:43:21.899389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:21.322 [2024-12-16 11:43:21.899399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:21.322 [2024-12-16 11:43:21.899408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:21.322 [2024-12-16 11:43:21.899416] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:21.322 [2024-12-16 11:43:21.899422] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:21.322 [2024-12-16 11:43:21.899427] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:21.322 [2024-12-16 11:43:21.903126] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:21.322 [2024-12-16 11:43:21.903171] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:21.322 [2024-12-16 11:43:21.903178] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.903183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:21.322 [2024-12-16 11:43:21.903209] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.903257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.322 [2024-12-16 11:43:21.903275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1429dc0 with addr=10.0.0.4, port=4420 00:21:21.322 [2024-12-16 11:43:21.903285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1429dc0 is same with the state(6) to be set 00:21:21.322 [2024-12-16 11:43:21.903299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1429dc0 (9): Bad file descriptor 00:21:21.322 [2024-12-16 11:43:21.903313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:21.322 [2024-12-16 11:43:21.903321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:21.322 [2024-12-16 11:43:21.903330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:21.322 [2024-12-16 11:43:21.903338] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:21.322 [2024-12-16 11:43:21.903344] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:21.322 [2024-12-16 11:43:21.903348] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:21.322 [2024-12-16 11:43:21.909295] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:21.322 [2024-12-16 11:43:21.909322] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:21.322 [2024-12-16 11:43:21.909328] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.909333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:21.322 [2024-12-16 11:43:21.909370] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.909423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.322 [2024-12-16 11:43:21.909442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4790 with addr=10.0.0.3, port=4420 00:21:21.322 [2024-12-16 11:43:21.909452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.322 [2024-12-16 11:43:21.909483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.322 [2024-12-16 11:43:21.909497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:21.322 [2024-12-16 11:43:21.909506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:21.322 [2024-12-16 11:43:21.909515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:21.322 [2024-12-16 11:43:21.909523] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:21.322 [2024-12-16 11:43:21.909529] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:21.322 [2024-12-16 11:43:21.909533] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:21.322 [2024-12-16 11:43:21.913217] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:21.322 [2024-12-16 11:43:21.913235] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:21.322 [2024-12-16 11:43:21.913240] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.913246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:21.322 [2024-12-16 11:43:21.913271] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:21:21.322 [2024-12-16 11:43:21.913320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.322 [2024-12-16 11:43:21.913338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1429dc0 with addr=10.0.0.4, port=4420 00:21:21.322 [2024-12-16 11:43:21.913348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1429dc0 is same with the state(6) to be set 00:21:21.322 [2024-12-16 11:43:21.913362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1429dc0 (9): Bad file descriptor 00:21:21.322 [2024-12-16 11:43:21.913375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:21.322 [2024-12-16 11:43:21.913384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:21.322 [2024-12-16 11:43:21.913393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:21.322 [2024-12-16 11:43:21.913401] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:21.322 [2024-12-16 11:43:21.913407] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:21.322 [2024-12-16 11:43:21.913411] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:21.322 [2024-12-16 11:43:21.919379] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:21.322 [2024-12-16 11:43:21.919402] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:21.322 [2024-12-16 11:43:21.919408] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.919413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:21.322 [2024-12-16 11:43:21.919444] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.919493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.322 [2024-12-16 11:43:21.919511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4790 with addr=10.0.0.3, port=4420 00:21:21.322 [2024-12-16 11:43:21.919522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.322 [2024-12-16 11:43:21.919552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.322 [2024-12-16 11:43:21.919567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:21.322 [2024-12-16 11:43:21.919575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:21.322 [2024-12-16 11:43:21.919584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:21.322 [2024-12-16 11:43:21.919592] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:21:21.322 [2024-12-16 11:43:21.919597] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:21.322 [2024-12-16 11:43:21.919602] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:21.322 [2024-12-16 11:43:21.923282] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:21.322 [2024-12-16 11:43:21.923305] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:21.322 [2024-12-16 11:43:21.923311] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.923316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:21.322 [2024-12-16 11:43:21.923341] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.923389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.322 [2024-12-16 11:43:21.923408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1429dc0 with addr=10.0.0.4, port=4420 00:21:21.322 [2024-12-16 11:43:21.923418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1429dc0 is same with the state(6) to be set 00:21:21.322 [2024-12-16 11:43:21.923432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1429dc0 (9): Bad file descriptor 00:21:21.322 [2024-12-16 11:43:21.923445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:21.322 [2024-12-16 11:43:21.923454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:21.322 [2024-12-16 11:43:21.923463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:21.322 [2024-12-16 11:43:21.923471] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:21.322 [2024-12-16 11:43:21.923476] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:21.322 [2024-12-16 11:43:21.923481] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:21.322 [2024-12-16 11:43:21.929454] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:21.322 [2024-12-16 11:43:21.929482] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:21.322 [2024-12-16 11:43:21.929488] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:21.322 [2024-12-16 11:43:21.929494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:21.322 [2024-12-16 11:43:21.929520] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:21.323 [2024-12-16 11:43:21.929573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.323 [2024-12-16 11:43:21.929593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4790 with addr=10.0.0.3, port=4420 00:21:21.323 [2024-12-16 11:43:21.929603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.323 [2024-12-16 11:43:21.929637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.323 [2024-12-16 11:43:21.929653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:21.323 [2024-12-16 11:43:21.929661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:21.323 [2024-12-16 11:43:21.929671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:21.323 [2024-12-16 11:43:21.929679] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:21.323 [2024-12-16 11:43:21.929685] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:21.323 [2024-12-16 11:43:21.929689] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:21.323 [2024-12-16 11:43:21.933353] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:21.323 [2024-12-16 11:43:21.933378] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:21.323 [2024-12-16 11:43:21.933384] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:21.323 [2024-12-16 11:43:21.933389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:21.323 [2024-12-16 11:43:21.933413] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:21.323 [2024-12-16 11:43:21.933462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.323 [2024-12-16 11:43:21.933480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1429dc0 with addr=10.0.0.4, port=4420 00:21:21.323 [2024-12-16 11:43:21.933491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1429dc0 is same with the state(6) to be set 00:21:21.323 [2024-12-16 11:43:21.933506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1429dc0 (9): Bad file descriptor 00:21:21.323 [2024-12-16 11:43:21.933519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:21.323 [2024-12-16 11:43:21.933527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:21.323 [2024-12-16 11:43:21.933536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:21.323 [2024-12-16 11:43:21.933544] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:21.323 [2024-12-16 11:43:21.933550] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:21.323 [2024-12-16 11:43:21.933555] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:21.323 [2024-12-16 11:43:21.939530] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:21.323 [2024-12-16 11:43:21.939554] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:21.323 [2024-12-16 11:43:21.939560] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:21.323 [2024-12-16 11:43:21.939565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:21.323 [2024-12-16 11:43:21.939589] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:21.323 [2024-12-16 11:43:21.939637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.323 [2024-12-16 11:43:21.939654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4790 with addr=10.0.0.3, port=4420 00:21:21.323 [2024-12-16 11:43:21.939664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.323 [2024-12-16 11:43:21.939706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.323 [2024-12-16 11:43:21.939721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:21.323 [2024-12-16 11:43:21.939730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:21.323 [2024-12-16 11:43:21.939739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:21.323 [2024-12-16 11:43:21.939747] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:21.323 [2024-12-16 11:43:21.939753] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:21.323 [2024-12-16 11:43:21.939758] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:21.323 [2024-12-16 11:43:21.943423] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:21.323 [2024-12-16 11:43:21.943445] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:21.323 [2024-12-16 11:43:21.943451] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:21.323 [2024-12-16 11:43:21.943456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:21.323 [2024-12-16 11:43:21.943481] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:21:21.323 [2024-12-16 11:43:21.943528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.323 [2024-12-16 11:43:21.943546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1429dc0 with addr=10.0.0.4, port=4420 00:21:21.323 [2024-12-16 11:43:21.943556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1429dc0 is same with the state(6) to be set 00:21:21.323 [2024-12-16 11:43:21.943571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1429dc0 (9): Bad file descriptor 00:21:21.323 [2024-12-16 11:43:21.943584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:21.323 [2024-12-16 11:43:21.943593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:21.323 [2024-12-16 11:43:21.943602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:21.323 [2024-12-16 11:43:21.943610] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:21.323 [2024-12-16 11:43:21.943615] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:21.323 [2024-12-16 11:43:21.943620] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:21.323 [2024-12-16 11:43:21.949599] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:21.323 [2024-12-16 11:43:21.949624] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:21.323 [2024-12-16 11:43:21.949630] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:21.323 [2024-12-16 11:43:21.949635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:21.323 [2024-12-16 11:43:21.949660] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:21.323 [2024-12-16 11:43:21.949708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.323 [2024-12-16 11:43:21.949725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4790 with addr=10.0.0.3, port=4420 00:21:21.323 [2024-12-16 11:43:21.949735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.323 [2024-12-16 11:43:21.949767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.323 [2024-12-16 11:43:21.949782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:21.323 [2024-12-16 11:43:21.949790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:21.323 [2024-12-16 11:43:21.949799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:21.323 [2024-12-16 11:43:21.949807] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:21:21.323 [2024-12-16 11:43:21.949813] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:21.323 [2024-12-16 11:43:21.949818] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:21.323 [2024-12-16 11:43:21.953489] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:21.323 [2024-12-16 11:43:21.953514] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:21.323 [2024-12-16 11:43:21.953520] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:21.323 [2024-12-16 11:43:21.953525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:21.323 [2024-12-16 11:43:21.953549] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:21.323 [2024-12-16 11:43:21.953597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.323 [2024-12-16 11:43:21.953615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1429dc0 with addr=10.0.0.4, port=4420 00:21:21.323 [2024-12-16 11:43:21.953625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1429dc0 is same with the state(6) to be set 00:21:21.323 [2024-12-16 11:43:21.953640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1429dc0 (9): Bad file descriptor 00:21:21.323 [2024-12-16 11:43:21.953653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:21.323 [2024-12-16 11:43:21.953661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:21.323 [2024-12-16 11:43:21.953671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:21.323 [2024-12-16 11:43:21.953678] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:21.323 [2024-12-16 11:43:21.953684] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:21.323 [2024-12-16 11:43:21.953689] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:21.323 [2024-12-16 11:43:21.959670] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:21.323 [2024-12-16 11:43:21.959699] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:21.323 [2024-12-16 11:43:21.959706] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:21.323 [2024-12-16 11:43:21.959711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:21.323 [2024-12-16 11:43:21.959736] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:21.323 [2024-12-16 11:43:21.959783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.323 [2024-12-16 11:43:21.959802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4790 with addr=10.0.0.3, port=4420 00:21:21.323 [2024-12-16 11:43:21.959812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b4790 is same with the state(6) to be set 00:21:21.323 [2024-12-16 11:43:21.959844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b4790 (9): Bad file descriptor 00:21:21.323 [2024-12-16 11:43:21.959858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:21.323 [2024-12-16 11:43:21.959867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:21.323 [2024-12-16 11:43:21.959876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:21.323 [2024-12-16 11:43:21.959883] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:21.323 [2024-12-16 11:43:21.959889] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:21.323 [2024-12-16 11:43:21.959894] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:21.323 [2024-12-16 11:43:21.963559] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:21.323 [2024-12-16 11:43:21.963580] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:21.323 [2024-12-16 11:43:21.963586] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:21.323 [2024-12-16 11:43:21.963591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:21.323 [2024-12-16 11:43:21.963615] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:21.323 [2024-12-16 11:43:21.963662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.323 [2024-12-16 11:43:21.963679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1429dc0 with addr=10.0.0.4, port=4420 00:21:21.323 [2024-12-16 11:43:21.963689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1429dc0 is same with the state(6) to be set 00:21:21.323 [2024-12-16 11:43:21.963714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1429dc0 (9): Bad file descriptor 00:21:21.323 [2024-12-16 11:43:21.963728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:21.323 [2024-12-16 11:43:21.963737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:21.323 [2024-12-16 11:43:21.963745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:21.323 [2024-12-16 11:43:21.963753] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:21.323 [2024-12-16 11:43:21.963759] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:21.323 [2024-12-16 11:43:21.963764] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:21.323 [2024-12-16 11:43:21.967808] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:21:21.323 [2024-12-16 11:43:21.967839] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:21.323 [2024-12-16 11:43:21.967859] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:21.323 [2024-12-16 11:43:21.967894] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:21:21.323 [2024-12-16 11:43:21.967909] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:21.323 [2024-12-16 11:43:21.967922] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:21.323 [2024-12-16 11:43:22.053887] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:21.323 [2024-12-16 11:43:22.053962] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 
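
Once the discovery updates land ("...4420 not found", "...4421 found again"), the test re-runs its assertion helpers against the host's RPC socket. Their bodies can be reconstructed from the xtrace pipelines visible in this log; the function names and line tags come from mdns_discovery.sh@69, @65, and @116-117, but the bodies are inferred from the traces, so the real script may differ in detail:

    get_subsystem_names() {  # mdns_discovery.sh@69
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {  # mdns_discovery.sh@65
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_notification_count() {  # mdns_discovery.sh@116-117; -i takes the last seen notify_id
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
                             | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

With notify_id already at 4 and no new notifications, the count comes back 0 and notify_id stays 4, which is exactly what the trace below records.
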
00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:22.320 11:43:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.320 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:21:22.320 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:21:22.320 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:22.320 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:22.320 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.320 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.320 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:22.320 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:22.320 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.578 11:43:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:21:22.578 [2024-12-16 11:43:23.191659] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@211 -- # get_bdev_list 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:23.513 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:23.772 [2024-12-16 11:43:24.381080] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:21:23.772 2024/12/16 11:43:24 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:21:23.772 request: 00:21:23.772 { 00:21:23.772 "method": "bdev_nvme_start_mdns_discovery", 00:21:23.772 "params": { 00:21:23.772 "name": "mdns", 00:21:23.772 "svcname": "_nvme-disc._http", 00:21:23.772 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:23.772 } 00:21:23.772 } 00:21:23.772 Got JSON-RPC error response 00:21:23.772 GoRPCClient: error on JSON-RPC call 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.772 11:43:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:21:24.339 [2024-12-16 11:43:24.969903] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:21:24.339 [2024-12-16 11:43:25.069894] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:21:24.602 [2024-12-16 11:43:25.169901] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:24.602 [2024-12-16 11:43:25.169932] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:24.603 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:24.603 cookie is 0 00:21:24.603 is_local: 1 00:21:24.603 our_own: 0 00:21:24.603 wide_area: 0 00:21:24.603 multicast: 1 00:21:24.603 cached: 1 00:21:24.603 [2024-12-16 11:43:25.269897] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:24.603 [2024-12-16 11:43:25.269925] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:24.603 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:24.603 cookie is 0 00:21:24.603 is_local: 1 00:21:24.603 our_own: 0 00:21:24.603 wide_area: 0 00:21:24.603 multicast: 1 00:21:24.603 cached: 1 00:21:24.603 [2024-12-16 11:43:25.269939] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:21:24.861 [2024-12-16 11:43:25.369902] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:24.861 [2024-12-16 11:43:25.369934] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:21:24.861 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:24.861 cookie is 0 00:21:24.861 is_local: 1 00:21:24.861 our_own: 0 00:21:24.861 wide_area: 0 00:21:24.861 multicast: 1 00:21:24.861 cached: 1 00:21:24.861 [2024-12-16 11:43:25.469900] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:24.861 [2024-12-16 11:43:25.469948] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:21:24.861 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:24.861 cookie is 0 00:21:24.861 is_local: 1 00:21:24.861 our_own: 0 00:21:24.861 wide_area: 0 00:21:24.861 multicast: 1 00:21:24.861 cached: 1 00:21:24.861 [2024-12-16 11:43:25.469962] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:21:25.797 [2024-12-16 11:43:26.181490] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:21:25.797 [2024-12-16 11:43:26.181535] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:21:25.797 [2024-12-16 11:43:26.181554] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:25.797 [2024-12-16 11:43:26.267598] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:21:25.797 [2024-12-16 11:43:26.326083] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:21:25.797 [2024-12-16 11:43:26.326817] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x14082f0:1 started. 00:21:25.797 [2024-12-16 11:43:26.328890] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:21:25.797 [2024-12-16 11:43:26.328917] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:25.797 [2024-12-16 11:43:26.330202] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x14082f0 was disconnected and freed. delete nvme_qpair. 
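The two JSON-RPC failures in this test (at 11:43:24 above and 11:43:29 below) are deliberate negative probes, inverted by the NOT wrapper from autotest_common.sh so that the test passes only if the RPC fails. The first reuses the discovery name "mdns" with a different svcname and is rejected on the name ("already running with name mdns"); the second reuses svcname _nvme-disc._tcp under the new name "cdc" and is rejected on the service. Both surface as Code=-17 (File exists). Reproduced by hand against the same socket, the first probe would be:
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
  # expected to fail with "File exists": a poller named "mdns" is already registered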
00:21:25.797 [2024-12-16 11:43:26.380920] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:25.797 [2024-12-16 11:43:26.380941] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:25.797 [2024-12-16 11:43:26.380962] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:25.797 [2024-12-16 11:43:26.467088] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:21:25.797 [2024-12-16 11:43:26.525568] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:21:25.797 [2024-12-16 11:43:26.526042] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1444420:1 started. 00:21:25.797 [2024-12-16 11:43:26.527409] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:21:25.797 [2024-12-16 11:43:26.527433] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:25.797 [2024-12-16 11:43:26.529927] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1444420 was disconnected and freed. delete nvme_qpair. 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ 
\m\d\n\s\1\_\n\v\m\e ]] 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:29.087 [2024-12-16 11:43:29.573013] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:21:29.087 2024/12/16 11:43:29 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:21:29.087 request: 00:21:29.087 { 00:21:29.087 "method": "bdev_nvme_start_mdns_discovery", 00:21:29.087 "params": { 00:21:29.087 "name": "cdc", 00:21:29.087 "svcname": "_nvme-disc._tcp", 00:21:29.087 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:29.087 } 00:21:29.087 } 00:21:29.087 Got JSON-RPC error response 00:21:29.087 GoRPCClient: error on JSON-RPC call 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 
]] 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:29.087 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local 
ip=10.0.0.3 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:21:29.088 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:29.088 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:21:29.088 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:29.088 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:29.088 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:29.088 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:29.088 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
=;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.088 11:43:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:21:29.088 [2024-12-16 11:43:29.769890] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:30.024 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:21:30.024 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:21:30.024 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:21:30.024 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:21:30.024 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:21:30.024 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:21:30.024 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:30.283 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:30.283 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:30.283 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 96875 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 96875 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 96892 00:21:30.283 Got SIGTERM, quitting. 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:30.283 11:43:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:21:30.283 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:21:30.283 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:21:30.283 avahi-daemon 0.8 exiting. 
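check_mdns_request_exists (@85-@108 in the trace above) decides found/not-found by scanning avahi-browse output line by line with bash pattern matches. With -t -r -p, avahi-browse terminates after one dump, resolves every service, and emits one semicolon-separated record per line; in the resolved "=" records the fields are event, interface, protocol, service name, type, domain, hostname, address, port, TXT. Under those field positions, the same "spdk1 at 10.0.0.3:8009 is gone" assertion could be expressed as:
  avahi-browse -t -r _nvme-disc._tcp -p | awk -F';' '$4 == "spdk1" && $8 == "10.0.0.3" && $9 == "8009"'
  # empty output means "not found", the expected state after @232 removed the 10.0.0.3:8009 discovery listener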
00:21:30.283 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:30.283 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:21:30.283 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.283 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:30.283 rmmod nvme_tcp 00:21:30.542 rmmod nvme_fabrics 00:21:30.542 rmmod nvme_keyring 00:21:30.542 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.542 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:21:30.542 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:21:30.542 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 96839 ']' 00:21:30.542 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 96839 00:21:30.542 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 96839 ']' 00:21:30.543 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 96839 00:21:30.543 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:21:30.543 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.543 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96839 00:21:30.543 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.543 killing process with pid 96839 00:21:30.543 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:30.543 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96839' 00:21:30.543 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 96839 00:21:30.543 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 96839 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:30.802 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:21:31.061 00:21:31.061 real 0m21.843s 00:21:31.061 user 0m42.465s 00:21:31.061 sys 0m2.149s 00:21:31.061 ************************************ 00:21:31.061 END TEST nvmf_mdns_discovery 00:21:31.061 ************************************ 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.061 ************************************ 00:21:31.061 START TEST nvmf_host_multipath 00:21:31.061 ************************************ 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:31.061 * Looking for test storage... 
00:21:31.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:21:31.061 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:31.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.321 --rc genhtml_branch_coverage=1 00:21:31.321 --rc genhtml_function_coverage=1 00:21:31.321 --rc genhtml_legend=1 00:21:31.321 --rc geninfo_all_blocks=1 00:21:31.321 --rc geninfo_unexecuted_blocks=1 00:21:31.321 00:21:31.321 ' 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:31.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.321 --rc genhtml_branch_coverage=1 00:21:31.321 --rc genhtml_function_coverage=1 00:21:31.321 --rc genhtml_legend=1 00:21:31.321 --rc geninfo_all_blocks=1 00:21:31.321 --rc geninfo_unexecuted_blocks=1 00:21:31.321 00:21:31.321 ' 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:31.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.321 --rc genhtml_branch_coverage=1 00:21:31.321 --rc genhtml_function_coverage=1 00:21:31.321 --rc genhtml_legend=1 00:21:31.321 --rc geninfo_all_blocks=1 00:21:31.321 --rc geninfo_unexecuted_blocks=1 00:21:31.321 00:21:31.321 ' 00:21:31.321 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:31.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.322 --rc genhtml_branch_coverage=1 00:21:31.322 --rc genhtml_function_coverage=1 00:21:31.322 --rc genhtml_legend=1 00:21:31.322 --rc geninfo_all_blocks=1 00:21:31.322 --rc geninfo_unexecuted_blocks=1 00:21:31.322 00:21:31.322 ' 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.322 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:31.322 Cannot find device "nvmf_init_br" 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:31.322 Cannot find device "nvmf_init_br2" 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:31.322 Cannot find device "nvmf_tgt_br" 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:31.322 Cannot find device "nvmf_tgt_br2" 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:31.322 Cannot find device "nvmf_init_br" 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:31.322 Cannot find device "nvmf_init_br2" 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:31.322 Cannot find device "nvmf_tgt_br" 00:21:31.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:31.323 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:31.323 Cannot find device "nvmf_tgt_br2" 00:21:31.323 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:31.323 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:31.323 Cannot find device "nvmf_br" 00:21:31.323 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:31.323 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:31.323 Cannot find device "nvmf_init_if" 00:21:31.323 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:31.323 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:31.323 Cannot find device "nvmf_init_if2" 00:21:31.323 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:31.323 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:31.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.323 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:31.323 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.323 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:31.323 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:31.323 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:31.323 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:31.323 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:31.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:31.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:21:31.582 00:21:31.582 --- 10.0.0.3 ping statistics --- 00:21:31.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.582 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:31.582 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:31.582 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:21:31.582 00:21:31.582 --- 10.0.0.4 ping statistics --- 00:21:31.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.582 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:31.582 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:31.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:31.582 00:21:31.583 --- 10.0.0.1 ping statistics --- 00:21:31.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.583 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:31.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:31.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:21:31.583 00:21:31.583 --- 10.0.0.2 ping statistics --- 00:21:31.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.583 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=97536 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 97536 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 97536 ']' 00:21:31.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.583 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:31.842 [2024-12-16 11:43:32.381129] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
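[Editor's sketch] Taken together, the nvmf_veth_init and target-launch traces above reduce to the short script below. This is an illustrative reconstruction condensed from the logged commands, not the test suite's own source; every namespace, interface, address, and binary path is copied verbatim from the trace.

    # Condensed reconstruction of the logged nvmf_veth_init sequence (nvmf/common.sh).
    set -e
    ip netns add nvmf_tgt_ns_spdk
    # Two initiator-side and two target-side veth pairs; the *_br ends stay on the host.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Target ends move into the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Initiators 10.0.0.1/.2 on the host; targets 10.0.0.3/.4 inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # One bridge joins all four host-side ends, so both paths share one L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Admit NVMe/TCP traffic and bridge-local forwarding (the ipts wrapper in the
    # trace also tags each rule with an SPDK_NVMF comment for later cleanup).
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    modprobe nvme-tcp
    # The target runs inside the namespace; its pid is traced above as 97536.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

The four pings in the trace are the sanity check on this topology: host-side initiator addresses reach the namespaced target addresses across the bridge, and vice versa, before any NVMe-oF traffic is attempted.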
00:21:31.842 [2024-12-16 11:43:32.381508] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.842 [2024-12-16 11:43:32.534787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:32.101 [2024-12-16 11:43:32.600729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.101 [2024-12-16 11:43:32.600812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.101 [2024-12-16 11:43:32.600841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.101 [2024-12-16 11:43:32.600852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.101 [2024-12-16 11:43:32.600861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.101 [2024-12-16 11:43:32.602191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.101 [2024-12-16 11:43:32.602197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.101 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.101 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:32.101 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.101 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.101 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:32.101 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.101 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=97536 00:21:32.101 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:32.361 [2024-12-16 11:43:33.066575] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.361 11:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:32.942 Malloc0 00:21:32.942 11:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:32.942 11:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:33.509 11:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:33.509 [2024-12-16 11:43:34.186087] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:33.509 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:21:33.768 [2024-12-16 11:43:34.426308] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:33.768 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=97625 00:21:33.768 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:33.768 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:33.768 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 97625 /var/tmp/bdevperf.sock 00:21:33.768 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 97625 ']' 00:21:33.768 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.768 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:33.768 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.768 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.768 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:34.335 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.335 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:34.335 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:34.594 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:34.852 Nvme0n1 00:21:34.852 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:35.418 Nvme0n1 00:21:35.418 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:35.418 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:36.354 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:36.354 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:36.613 11:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
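[Editor's sketch] From here the trace repeats one pattern: flip the ANA state of the two listeners, let bdevperf run for a few seconds, and verify via bpftrace that I/O actually flows on the port advertising the requested state. The functions below are a minimal reconstruction of the set_ANA_state / confirm_io_on_port pair condensed from the logged host/multipath.sh steps; the rpc.py and bpftrace.sh paths, the NQN, the jq filter, and the awk/cut/sed parsing are verbatim from the trace, while composing them into shell functions is my own framing.

    #!/usr/bin/env bash
    spdk=/home/vagrant/spdk_repo/spdk
    rpc_py=$spdk/scripts/rpc.py
    bpf_sh=$spdk/scripts/bpftrace.sh
    NQN=nqn.2016-06.io.spdk:cnode1
    nvmfapp_pid=97536                          # nvmf_tgt pid captured earlier in the trace
    trace_txt=$spdk/test/nvmf/host/trace.txt

    set_ANA_state() {                          # e.g. set_ANA_state non_optimized optimized
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    confirm_io_on_port() {                     # e.g. confirm_io_on_port optimized 4421
        local ana_state=$1 expected_port=$2
        # Attach nvmf_path.bt to the target and record which path serves I/O for ~6 s.
        "$bpf_sh" "$nvmfapp_pid" "$spdk/scripts/bpf/nvmf_path.bt" &> "$trace_txt" &
        local dtrace_pid=$!
        sleep 6
        # The listener currently advertising the requested ANA state.
        local active_port
        active_port=$("$rpc_py" nvmf_subsystem_get_listeners "$NQN" |
            jq -r ".[] | select(.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")
        # First "@path[10.0.0.3, PORT]: count" line emitted by the probe.
        local port
        port=$(awk '$1=="@path[10.0.0.3," {print $2}' "$trace_txt" | cut -d ']' -f1 | sed -n 1p)
        kill "$dtrace_pid"
        rm -f "$trace_txt"
        [[ $port == "$expected_port" && $active_port == "$expected_port" ]]
    }

Each "Attaching 4 probes..." block in the remainder of the log is one invocation of this check; an empty block (as in the inaccessible/inaccessible case) means no path carried I/O, which the test accepts by comparing empty strings.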
00:21:36.872 11:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:36.872 11:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97700 00:21:36.872 11:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97536 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:36.872 11:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:43.433 Attaching 4 probes... 00:21:43.433 @path[10.0.0.3, 4421]: 15748 00:21:43.433 @path[10.0.0.3, 4421]: 15659 00:21:43.433 @path[10.0.0.3, 4421]: 16474 00:21:43.433 @path[10.0.0.3, 4421]: 17495 00:21:43.433 @path[10.0.0.3, 4421]: 17186 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97700 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:43.433 11:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:43.691 11:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:43.949 11:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:43.949 11:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97837 00:21:43.949 11:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:43.949 11:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97536 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:50.515 Attaching 4 probes... 00:21:50.515 @path[10.0.0.3, 4420]: 16899 00:21:50.515 @path[10.0.0.3, 4420]: 17310 00:21:50.515 @path[10.0.0.3, 4420]: 17103 00:21:50.515 @path[10.0.0.3, 4420]: 17331 00:21:50.515 @path[10.0.0.3, 4420]: 17473 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97837 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:50.515 11:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:50.515 11:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:50.774 11:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:50.774 11:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97536 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:50.774 11:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97971 00:21:50.774 11:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:57.338 Attaching 4 probes... 
00:21:57.338 @path[10.0.0.3, 4421]: 15650 00:21:57.338 @path[10.0.0.3, 4421]: 17835 00:21:57.338 @path[10.0.0.3, 4421]: 16657 00:21:57.338 @path[10.0.0.3, 4421]: 16945 00:21:57.338 @path[10.0.0.3, 4421]: 16826 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97971 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:57.338 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:57.597 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:57.866 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:57.866 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98103 00:21:57.866 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97536 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:57.866 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:04.458 Attaching 4 probes... 
00:22:04.458 00:22:04.458 00:22:04.458 00:22:04.458 00:22:04.458 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98103 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:04.458 11:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:04.718 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:04.718 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98240 00:22:04.718 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:04.718 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97536 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:11.322 Attaching 4 probes... 
00:22:11.322 @path[10.0.0.3, 4421]: 17092 00:22:11.322 @path[10.0.0.3, 4421]: 16714 00:22:11.322 @path[10.0.0.3, 4421]: 17492 00:22:11.322 @path[10.0.0.3, 4421]: 17427 00:22:11.322 @path[10.0.0.3, 4421]: 16847 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98240 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:11.322 [2024-12-16 11:44:11.813960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814123] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 [2024-12-16 11:44:11.814211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40d40 is same with the state(6) to be set 00:22:11.322 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:12.270 11:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:12.270 11:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98377 00:22:12.270 11:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:12.270 11:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97536 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:18.845 11:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:18.845 11:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:18.845 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:18.845 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:18.845 Attaching 4 probes... 
00:22:18.845 @path[10.0.0.3, 4420]: 16510 00:22:18.845 @path[10.0.0.3, 4420]: 16687 00:22:18.845 @path[10.0.0.3, 4420]: 17144 00:22:18.845 @path[10.0.0.3, 4420]: 16794 00:22:18.845 @path[10.0.0.3, 4420]: 17778 00:22:18.845 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:18.845 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:18.845 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:18.845 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:18.845 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:18.845 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:18.845 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98377 00:22:18.845 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:18.845 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:18.845 [2024-12-16 11:44:19.412031] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:18.845 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:19.104 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:25.678 11:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:25.678 11:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98564 00:22:25.678 11:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97536 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:25.678 11:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:32.275 11:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:32.275 11:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:32.275 Attaching 4 probes... 
00:22:32.275 @path[10.0.0.3, 4421]: 15873 00:22:32.275 @path[10.0.0.3, 4421]: 16137 00:22:32.275 @path[10.0.0.3, 4421]: 16208 00:22:32.275 @path[10.0.0.3, 4421]: 15902 00:22:32.275 @path[10.0.0.3, 4421]: 16423 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98564 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 97625 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 97625 ']' 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 97625 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97625 00:22:32.275 killing process with pid 97625 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97625' 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 97625 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 97625 00:22:32.275 { 00:22:32.275 "results": [ 00:22:32.275 { 00:22:32.275 "job": "Nvme0n1", 00:22:32.275 "core_mask": "0x4", 00:22:32.275 "workload": "verify", 00:22:32.275 "status": "terminated", 00:22:32.275 "verify_range": { 00:22:32.275 "start": 0, 00:22:32.275 "length": 16384 00:22:32.275 }, 00:22:32.275 "queue_depth": 128, 00:22:32.275 "io_size": 4096, 00:22:32.275 "runtime": 56.064131, 00:22:32.275 "iops": 7192.102915141947, 00:22:32.275 "mibps": 28.09415201227323, 00:22:32.275 "io_failed": 0, 00:22:32.275 "io_timeout": 0, 00:22:32.275 "avg_latency_us": 17767.693988085426, 00:22:32.275 "min_latency_us": 1742.6618181818183, 00:22:32.275 "max_latency_us": 7046430.72 00:22:32.275 } 00:22:32.275 ], 00:22:32.275 "core_count": 1 00:22:32.275 } 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 97625 00:22:32.275 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:32.275 [2024-12-16 11:43:34.505645] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 
24.03.0 initialization... 00:22:32.275 [2024-12-16 11:43:34.505757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97625 ] 00:22:32.275 [2024-12-16 11:43:34.656619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.275 [2024-12-16 11:43:34.716106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.275 Running I/O for 90 seconds... 00:22:32.275 8530.00 IOPS, 33.32 MiB/s [2024-12-16T11:44:33.028Z] 8455.00 IOPS, 33.03 MiB/s [2024-12-16T11:44:33.028Z] 8363.67 IOPS, 32.67 MiB/s [2024-12-16T11:44:33.028Z] 8230.25 IOPS, 32.15 MiB/s [2024-12-16T11:44:33.028Z] 8232.80 IOPS, 32.16 MiB/s [2024-12-16T11:44:33.028Z] 8314.00 IOPS, 32.48 MiB/s [2024-12-16T11:44:33.028Z] 8354.00 IOPS, 32.63 MiB/s [2024-12-16T11:44:33.028Z] 8386.38 IOPS, 32.76 MiB/s [2024-12-16T11:44:33.028Z] [2024-12-16 11:43:44.552484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.275 [2024-12-16 11:43:44.552554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:32.275 [2024-12-16 11:43:44.552592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.275 [2024-12-16 11:43:44.552609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:32.275 [2024-12-16 11:43:44.552631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.275 [2024-12-16 11:43:44.552648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:32.275 [2024-12-16 11:43:44.552670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.275 [2024-12-16 11:43:44.552686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.552708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.552723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.552759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.552774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.552794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.552809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:32.276 
[2024-12-16 11:43:44.552829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.552844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.554809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.554823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.555608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.555638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.555679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.555695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.555746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.555773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.555795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.555810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:32.276 [2024-12-16 11:43:44.555831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.276 [2024-12-16 11:43:44.555845] nvme_qpair.c: 474:spdk_nvme_print_completion: 
00:22:32.276 [2024-12-16 11:43:44.555] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: several hundred near-identical command/completion pairs on qid:1 (timestamps 11:43:44.555866 through 11:43:44.575730): WRITE (and one READ) commands, cid 0-126, nsid:1, lba 44696-45712, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 sqhd:0000-007f p:0 m:0 dnr:0 (repeated NOTICE pairs elided)
00:22:32.281 [2024-12-16 11:43:44.575770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:32.281 [2024-12-16 11:43:44.575797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.281 [2024-12-16 11:43:44.575815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:32.281 [2024-12-16 11:43:44.575837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.281 [2024-12-16 11:43:44.575853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:32.281 [2024-12-16 11:43:44.575874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.281 [2024-12-16 11:43:44.575889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:32.281 [2024-12-16 11:43:44.575910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.281 [2024-12-16 11:43:44.575925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:32.281 [2024-12-16 11:43:44.575946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.281 [2024-12-16 11:43:44.575961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:32.281 [2024-12-16 11:43:44.575982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.281 [2024-12-16 11:43:44.575997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:22:32.282 [2024-12-16 11:43:44.576928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.576964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.576978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.577023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.577052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.577073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.282 [2024-12-16 11:43:44.577087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.577106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.577120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.577140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.577203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.577227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.577242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.577263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.577279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.577300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.577315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.577336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.577351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.577372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.577386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.577408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.282 [2024-12-16 11:43:44.577423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:32.282 [2024-12-16 11:43:44.577447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.577462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.577482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.577497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.577526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.577542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.577577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.577592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.577613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.577627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.577647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.577676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.577695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.577709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.577728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.577742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.577762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.577776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.577795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.577809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.577830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.577844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.578810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.578839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.578868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.578884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.578905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.578919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.578939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.578964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.578987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:32.283 [2024-12-16 11:43:44.579087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.283 [2024-12-16 11:43:44.579927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:32.283 [2024-12-16 11:43:44.579975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.579992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:22:32.284 [2024-12-16 11:43:44.580307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.580969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.580988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.581779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.581813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.581848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.581883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.581914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.581934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.581961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.581980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.582007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.582026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.582054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.582073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.582100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.582119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.582186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.582212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.582240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.582260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.582287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.284 [2024-12-16 11:43:44.582306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:32.284 [2024-12-16 11:43:44.582333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:32.285 [2024-12-16 11:43:44.582547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.582970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:32.285 [2024-12-16 11:43:44.582997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.285 [2024-12-16 11:43:44.583016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
[... several hundred near-identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: WRITE (and one READ per pass) commands on sqid:1 nsid:1, lba 44696-45712, len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0; timestamps 2024-12-16 11:43:44.583 through 11:43:44.595 ...]
00:22:32.290 [2024-12-16 11:43:44.595474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.290 [2024-12-16 11:43:44.595489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:22:32.290 [2024-12-16 11:43:44.595510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.290 [2024-12-16 11:43:44.595524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:32.290 [2024-12-16 11:43:44.595545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.290 [2024-12-16 11:43:44.595560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.595581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.595596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.595624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.595641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.603619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.603672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.603717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.603753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.603782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.603802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.603828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.603846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.603873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.603891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.603917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.603936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.603970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.603988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.604014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.604032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.604058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.604076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.604102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.604121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.604165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.604188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.604215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.604251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.604280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.604299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.604325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.604344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.604382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.604400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.604426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.604444] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.604471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.604488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.604514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.604533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.604561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.604580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.605431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.605468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.605502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.605524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.605551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.605570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.605607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.605625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.605651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.605683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.605713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.605732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.605758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 
11:43:44.605777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.605803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.605821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.605847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.605865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.605891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.605909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.605935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.605954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.605980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.605998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.606024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.606042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.606068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.606087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.606112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.606131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.606177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.606197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.606235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45376 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.606253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:32.291 [2024-12-16 11:43:44.606291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.291 [2024-12-16 11:43:44.606310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:98 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.606976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.606994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607168] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 
m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.607970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.607996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.608025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.608056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:32.292 [2024-12-16 11:43:44.608083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.292 [2024-12-16 11:43:44.608102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.293 [2024-12-16 11:43:44.608296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.608979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.608998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.609035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:32.293 [2024-12-16 11:43:44.609054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:32.293 [2024-12-16 11:43:44.610726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.293 [2024-12-16 11:43:44.610744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.610770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.610787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.610813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.610840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.610868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.610887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.610913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.610931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.610958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.610976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.611002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.611020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.611046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.611064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.611090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.611108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.611134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.611169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.611197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.611216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.611242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.611260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.611298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.611317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.611343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.611361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.611387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.611405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.611441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.611460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:32.294 [2024-12-16 11:43:44.611487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.294 [2024-12-16 11:43:44.611505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:22:32.294 [2024-12-16 11:43:44.611531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.294 [2024-12-16 11:43:44.611549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0
[... identical command/completion *NOTICE* pairs repeated for WRITE lba:45072 through lba:45712 and then lba:44696 through lba:45224 (the lba:44696 entry is a READ printed as SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); every completion in the burst is ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:22:32.298 [2024-12-16 11:43:44.620515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.298 [2024-12-16 11:43:44.620545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:22:32.298 8345.44 IOPS, 32.60 MiB/s [2024-12-16T11:44:33.051Z]
00:22:32.298 8365.70 IOPS, 32.68 MiB/s [2024-12-16T11:44:33.051Z]
00:22:32.298 8388.64 IOPS, 32.77 MiB/s [2024-12-16T11:44:33.051Z]
00:22:32.298 8405.17 IOPS, 32.83 MiB/s [2024-12-16T11:44:33.051Z]
00:22:32.298 8427.69 IOPS, 32.92 MiB/s [2024-12-16T11:44:33.051Z]
00:22:32.298 8451.57 IOPS, 33.01 MiB/s [2024-12-16T11:44:33.051Z]
00:22:32.298 8464.87 IOPS, 33.07 MiB/s [2024-12-16T11:44:33.051Z]
00:22:32.298 [2024-12-16 11:43:51.170212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:32.298 [2024-12-16 11:43:51.170301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
[... pairs repeated for WRITE lba:104152 through lba:104328 and READ lba:103832 through lba:104072 (READs printed as SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); every completion is ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:22:32.299 [2024-12-16 11:43:51.175636] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.299 [2024-12-16 11:43:51.175653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:32.299 [2024-12-16 11:43:51.175682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.299 [2024-12-16 11:43:51.175698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:32.299 [2024-12-16 11:43:51.175748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.300 [2024-12-16 11:43:51.175765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:32.300 [2024-12-16 11:43:51.175793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.300 [2024-12-16 11:43:51.175810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:32.300 [2024-12-16 11:43:51.175838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.300 [2024-12-16 11:43:51.175853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:32.300 [2024-12-16 11:43:51.175881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.300 [2024-12-16 11:43:51.175896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:32.300 [2024-12-16 11:43:51.175924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.300 [2024-12-16 11:43:51.175940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.300 [2024-12-16 11:43:51.175968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.300 [2024-12-16 11:43:51.175984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:32.300 7979.19 IOPS, 31.17 MiB/s [2024-12-16T11:44:33.053Z] 8003.29 IOPS, 31.26 MiB/s [2024-12-16T11:44:33.053Z] 8054.44 IOPS, 31.46 MiB/s [2024-12-16T11:44:33.053Z] 8068.21 IOPS, 31.52 MiB/s [2024-12-16T11:44:33.053Z] 8087.90 IOPS, 31.59 MiB/s [2024-12-16T11:44:33.053Z] 8100.14 IOPS, 31.64 MiB/s [2024-12-16T11:44:33.053Z] 8121.86 IOPS, 31.73 MiB/s [2024-12-16T11:44:33.053Z] [2024-12-16 11:43:58.350650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.300 [2024-12-16 11:43:58.350750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 
00:22:32.300 READ lba:4616-4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, and WRITE lba:4688-5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; all 128 completions are ASYMMETRIC ACCESS INACCESSIBLE (03/02), with sqhd running 004f through 007f, wrapping to 0000, and ending at 004e (consistent with a 128-entry submission queue).
00:22:32.303 Throughput samples through the failure window and the start of recovery, each stamped [2024-12-16T11:44:33.056Z]:
00:22:32.303   7860.91 IOPS, 30.71 MiB/s
00:22:32.303   7533.38 IOPS, 29.43 MiB/s
00:22:32.303   7232.04 IOPS, 28.25 MiB/s
00:22:32.303   6953.88 IOPS, 27.16 MiB/s
00:22:32.303   6696.33 IOPS, 26.16 MiB/s
00:22:32.303   6457.18 IOPS, 25.22 MiB/s
00:22:32.303   6234.52 IOPS, 24.35 MiB/s
00:22:32.303   6234.93 IOPS, 24.36 MiB/s
00:22:32.303   6313.97 IOPS, 24.66 MiB/s
00:22:32.303   6379.75 IOPS, 24.92 MiB/s
00:22:32.303   6450.18 IOPS, 25.20 MiB/s
00:22:32.303   6514.50 IOPS, 25.45 MiB/s
00:22:32.303   6569.31 IOPS, 25.66 MiB/s
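An aside on the numbers above: the throughput samples are mutually consistent with the len:8 commands in these bursts. Assuming the namespace uses 512 B blocks (the block size is not shown in this excerpt), each I/O is 8 x 512 B = 4 KiB, so every MiB/s figure should equal IOPS * 4096 / 2^20, i.e. IOPS / 256. A minimal Python check under that assumption:

    # Illustrative sanity check, not part of the test suite.
    # Assumes 4 KiB per I/O (len:8 blocks x 512 B); verifies MiB/s == IOPS / 256
    # to within the two-decimal rounding used in the log.
    samples = [(7979.19, 31.17), (8121.86, 31.73), (6234.52, 24.35), (6569.31, 25.66)]
    for iops, mib_s in samples:
        assert abs(iops * 8 * 512 / 2**20 - mib_s) < 0.01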
00:22:32.303 [2024-12-16 11:44:11.814151 onward] nvme_qpair.c: final NOTICE burst on qid:1, condensed: WRITE lba:54128-54184 len:8 complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0062-0069; starting with WRITE lba:54192, every remaining completion is ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000, covering READ lba:53632-53744 and WRITE lba:54200-54232. The excerpt breaks off mid-entry at: WRITE sqid:1 cid:89 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:22:32.304 [2024-12-16 11:44:11.816545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.816567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.816587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.816608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.816628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.816650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.816669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.816691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.816711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.816732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.816761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.816784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.816804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.816825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.816845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.816866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.816886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.816907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.816927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.816949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 
11:44:11.816968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.816990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.817009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.817031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.817050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.817071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.817090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.817113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.817133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.817171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.817192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.817213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.817233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.817255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.817275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.817297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.817332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.817355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.304 [2024-12-16 11:44:11.817375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.304 [2024-12-16 11:44:11.817398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.817417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.817458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.817499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.817540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.817581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.817623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.817664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.817705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.817747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.817789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.817830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.817880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.817922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.817963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.817984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.305 [2024-12-16 11:44:11.818649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.818690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 
11:44:11.818712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.818732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.818774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.818815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.818856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.818898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.305 [2024-12-16 11:44:11.818949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.305 [2024-12-16 11:44:11.818970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.818990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:55 nsid:1 lba:53976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.306 [2024-12-16 11:44:11.819734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:54008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.819970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.819989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54048 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.820039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.820082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.306 [2024-12-16 11:44:11.820131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.306 [2024-12-16 11:44:11.820188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.306 [2024-12-16 11:44:11.820230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.306 [2024-12-16 11:44:11.820281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.306 [2024-12-16 11:44:11.820322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.306 [2024-12-16 11:44:11.820371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.306 [2024-12-16 11:44:11.820414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.306 [2024-12-16 11:44:11.820455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 
[2024-12-16 11:44:11.820504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.820546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.820589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.820642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.306 [2024-12-16 11:44:11.820683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.306 [2024-12-16 11:44:11.820705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.307 [2024-12-16 11:44:11.820724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.307 [2024-12-16 11:44:11.820746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.307 [2024-12-16 11:44:11.820766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.307 [2024-12-16 11:44:11.820787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.307 [2024-12-16 11:44:11.820806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.307 [2024-12-16 11:44:11.821329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.307 [2024-12-16 11:44:11.821369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.307 [2024-12-16 11:44:11.821391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.307 [2024-12-16 11:44:11.821410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.307 [2024-12-16 11:44:11.821430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.307 [2024-12-16 11:44:11.821449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.307 [2024-12-16 11:44:11.821469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.307 [2024-12-16 11:44:11.821488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.307 [2024-12-16 11:44:11.821509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.307 [2024-12-16 11:44:11.821528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.307 [2024-12-16 11:44:11.821556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ed90 is same with the state(6) to be set 00:22:32.307 [2024-12-16 11:44:11.823424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:32.307 [2024-12-16 11:44:11.823477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0ed90 (9): Bad file descriptor 00:22:32.307 [2024-12-16 11:44:11.823969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.307 [2024-12-16 11:44:11.824013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0ed90 with addr=10.0.0.3, port=4421 00:22:32.307 [2024-12-16 11:44:11.824036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0ed90 is same with the state(6) to be set 00:22:32.307 [2024-12-16 11:44:11.824247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0ed90 (9): Bad file descriptor 00:22:32.307 [2024-12-16 11:44:11.824466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:32.307 [2024-12-16 11:44:11.824499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:32.307 [2024-12-16 11:44:11.824521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:32.307 [2024-12-16 11:44:11.824541] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:32.307 [2024-12-16 11:44:11.824561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:32.307 6622.06 IOPS, 25.87 MiB/s [2024-12-16T11:44:33.060Z] 6672.57 IOPS, 26.06 MiB/s [2024-12-16T11:44:33.060Z] 6717.45 IOPS, 26.24 MiB/s [2024-12-16T11:44:33.060Z] 6759.31 IOPS, 26.40 MiB/s [2024-12-16T11:44:33.060Z] 6803.82 IOPS, 26.58 MiB/s [2024-12-16T11:44:33.060Z] 6850.54 IOPS, 26.76 MiB/s [2024-12-16T11:44:33.060Z] 6888.02 IOPS, 26.91 MiB/s [2024-12-16T11:44:33.060Z] 6933.56 IOPS, 27.08 MiB/s [2024-12-16T11:44:33.060Z] 6968.34 IOPS, 27.22 MiB/s [2024-12-16T11:44:33.060Z] 7002.82 IOPS, 27.35 MiB/s [2024-12-16T11:44:33.060Z] [2024-12-16 11:44:21.913947] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
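The block above is the multipath failover this test exercises: in-flight I/O on the primary path first completes with ASYMMETRIC ACCESS INACCESSIBLE (the ANA state change), the remaining queued commands are aborted when the submission queue is deleted, and bdev_nvme reconnects to the second listener at 10.0.0.3:4421 until the reset finally succeeds. To pull just that timeline out of a saved copy of this console output, a grep along these lines works (a sketch; console.log stands in for wherever the output was captured):

  grep -E 'resetting controller|Resetting controller|connect\(\) failed|sock connection error' console.log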
7019.83 IOPS, 27.42 MiB/s [2024-12-16T11:44:33.060Z] 7036.36 IOPS, 27.49 MiB/s [2024-12-16T11:44:33.060Z] 7053.69 IOPS, 27.55 MiB/s [2024-12-16T11:44:33.060Z] 7070.20 IOPS, 27.62 MiB/s [2024-12-16T11:44:33.060Z] 7087.50 IOPS, 27.69 MiB/s [2024-12-16T11:44:33.060Z] 7104.67 IOPS, 27.75 MiB/s [2024-12-16T11:44:33.060Z] 7123.60 IOPS, 27.83 MiB/s [2024-12-16T11:44:33.060Z] 7143.17 IOPS, 27.90 MiB/s [2024-12-16T11:44:33.060Z] 7158.17 IOPS, 27.96 MiB/s [2024-12-16T11:44:33.060Z] 7176.36 IOPS, 28.03 MiB/s [2024-12-16T11:44:33.060Z] 7193.41 IOPS, 28.10 MiB/s [2024-12-16T11:44:33.060Z]
Received shutdown signal, test time was about 56.064897 seconds

Latency(us)
[2024-12-16T11:44:33.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4000
Nvme0n1 : 56.06 7192.10 28.09 0.00 0.00 17767.69 1742.66 7046430.72
[2024-12-16T11:44:33.060Z] ===================================================================================================================
[2024-12-16T11:44:33.060Z] Total : 7192.10 28.09 0.00 0.00 17767.69 1742.66 7046430.72
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 97536 ']'
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 97536
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 97536 ']'
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 97536
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
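The MiB/s figures in the Device Information summary above follow directly from the IOPS column and the 4096-byte IO size named in the Job line; a quick sanity check (a throwaway one-liner, not part of the harness):

  awk 'BEGIN { printf "%.2f MiB/s\n", 7192.10 * 4096 / (1024 * 1024) }'
  # prints 28.09, matching the Nvme0n1 and Total rows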
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97536
killing process with pid 97536
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97536'
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 97536
11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 97536
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
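The nvmf_veth_fini teardown above mirrors the virtual topology that nvmftestinit builds for NET_TYPE=virt: veth pairs joined by the nvmf_br bridge, with the target-side ends living inside the nvmf_tgt_ns_spdk namespace and a second set of interfaces (the *2 names) serving the alternate multipath leg. A minimal sketch of the forward direction for one leg, using the interface and namespace names from this log; the 10.0.0.x addressing is an assumption inferred from the 10.0.0.3 target address seen earlier, and the real setup in test/nvmf/common.sh does more:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
  ip link add nvmf_br type bridge                             # bridge joining both pairs
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # assumed initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # assumed target address
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up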
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0

real    1m1.632s
user    2m55.266s
sys     0m13.203s
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvmf_host_multipath
************************************
11:44:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
11:44:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
11:44:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
11:44:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_timeout
************************************
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]]
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-:
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-:
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<'
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
 --rc genhtml_branch_coverage=1
 --rc genhtml_function_coverage=1
 --rc genhtml_legend=1
 --rc geninfo_all_blocks=1
 --rc geninfo_unexecuted_blocks=1

'
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' [... same option block as above ...] '
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov [... same option block as above ...] '
11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov [... same option block as above ...] '
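The lt 1.15 2 trace above is scripts/common.sh gating on the installed lcov version: both versions are split on '.', '-' and ':' and compared field by field, with missing fields treated as zero. A condensed bash sketch of that logic, not the verbatim script:

  lt() { # exit 0 when version $1 sorts strictly before version $2
    local -a ver1 ver2
    local v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
      ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
      ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1 # equal versions are not "less than"
  }
  lt 1.15 2 && echo "1.15 < 2"   # matches the return 0 traced above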
NVMF_PORT=4420 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.828 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.829 11:44:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:32.829 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 
00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:32.829 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:33.095 Cannot find device "nvmf_init_br" 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:33.095 Cannot find device "nvmf_init_br2" 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout 
-- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:33.095 Cannot find device "nvmf_tgt_br" 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:33.095 Cannot find device "nvmf_tgt_br2" 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:33.095 Cannot find device "nvmf_init_br" 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:33.095 Cannot find device "nvmf_init_br2" 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:33.095 Cannot find device "nvmf_tgt_br" 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:33.095 Cannot find device "nvmf_tgt_br2" 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:33.095 Cannot find device "nvmf_br" 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:33.095 Cannot find device "nvmf_init_if" 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:33.095 Cannot find device "nvmf_init_if2" 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:33.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:33.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:33.095 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:33.355 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:33.355 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:22:33.355 00:22:33.355 --- 10.0.0.3 ping statistics --- 00:22:33.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.355 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:33.355 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:33.355 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:22:33.355 00:22:33.355 --- 10.0.0.4 ping statistics --- 00:22:33.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.355 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:33.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:33.355 00:22:33.355 --- 10.0.0.1 ping statistics --- 00:22:33.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.355 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:33.355 11:44:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:33.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:22:33.355 00:22:33.355 --- 10.0.0.2 ping statistics --- 00:22:33.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.355 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=98938 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 98938 00:22:33.355 11:44:34 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98938 ']' 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.355 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.356 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.356 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.356 11:44:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:33.356 [2024-12-16 11:44:34.090861] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:22:33.356 [2024-12-16 11:44:34.090965] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.614 [2024-12-16 11:44:34.233782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:33.614 [2024-12-16 11:44:34.289910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.614 [2024-12-16 11:44:34.289980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.614 [2024-12-16 11:44:34.290007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.614 [2024-12-16 11:44:34.290015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.614 [2024-12-16 11:44:34.290022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
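For orientation, the namespace and veth plumbing that nvmf_veth_init traced above reduces to the sketch below. It is assembled from the ip/iptables commands visible in the trace (interface, namespace, and address names exactly as logged), assumes iproute2 and root, and omits the "-m comment" tags the harness attaches to its iptables rules so it can find and delete them later:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator 1 <-> bridge
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator 2 <-> bridge
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target 1   <-> bridge
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target 2   <-> bridge
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live inside the netns
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" master nvmf_br                           # one bridge joins all four veth pairs
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings in the trace (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) verify both directions across the bridge before the target application is started.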
00:22:33.614 [2024-12-16 11:44:34.291232] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.614 [2024-12-16 11:44:34.291239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.551 11:44:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.551 11:44:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:34.551 11:44:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:34.551 11:44:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.551 11:44:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:34.551 11:44:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.551 11:44:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:34.551 11:44:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:34.809 [2024-12-16 11:44:35.443708] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.809 11:44:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:35.069 Malloc0 00:22:35.069 11:44:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:35.328 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:35.587 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:35.846 [2024-12-16 11:44:36.506146] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:35.846 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:35.846 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=99035 00:22:35.846 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 99035 /var/tmp/bdevperf.sock 00:22:35.846 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99035 ']' 00:22:35.846 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.846 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.846 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
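Stripped of the xtrace noise, the target provisioning just traced comes down to five RPCs against the default /var/tmp/spdk.sock plus the bdevperf launch. Paths are as they appear in this workspace, and the 64/512 arguments are the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values set earlier in timeout.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, flags as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB backing bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 through the subsystem
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

Note that bdevperf is started with -z (wait for RPC configuration) and its own RPC socket, so the controller attach that follows can inject the timeout parameters before any I/O is issued.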
00:22:35.846 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.846 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:35.846 [2024-12-16 11:44:36.579416] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:22:35.846 [2024-12-16 11:44:36.579503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99035 ] 00:22:36.105 [2024-12-16 11:44:36.722783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.105 [2024-12-16 11:44:36.784106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.363 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.363 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:36.363 11:44:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:36.622 11:44:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:36.881 NVMe0n1 00:22:36.881 11:44:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=99068 00:22:36.881 11:44:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:36.881 11:44:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:37.140 Running I/O for 10 seconds... 
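Condensed the same way, the initiator-side wiring is three commands against bdevperf's private socket; the deliberately short --ctrlr-loss-timeout-sec and --reconnect-delay-sec values are the knobs this timeout test is built around:

brpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
$brpc bdev_nvme_set_options -r -1                  # retry count -1, i.e. keep retrying (value as traced)
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &      # rpc_pid in the trace; runs the 10 s verify job

With that in place, the listener removal below yanks the only path to 10.0.0.3:4420 mid-run, which is why every in-flight command on qid:1 comes back ABORTED - SQ DELETION.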
00:22:38.079 11:44:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:38.079 8472.00 IOPS, 33.09 MiB/s [2024-12-16T11:44:38.832Z] [2024-12-16 11:44:38.807801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd550 is same with the state(6) to be set 00:22:38.079 [2024-12-16 11:44:38.807856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd550 is same with the state(6) to be set 00:22:38.079 [2024-12-16 11:44:38.807868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd550 is same with the state(6) to be set 00:22:38.079 [2024-12-16 11:44:38.807877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd550 is same with the state(6) to be set 00:22:38.079 [2024-12-16 11:44:38.807886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd550 is same with the state(6) to be set 00:22:38.079 [2024-12-16 11:44:38.807898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd550 is same with the state(6) to be set 00:22:38.079 [2024-12-16 11:44:38.808381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.079 [2024-12-16 11:44:38.808420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.079 [2024-12-16 11:44:38.808433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.079 [2024-12-16 11:44:38.808443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.079 [2024-12-16 11:44:38.808454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.079 [2024-12-16 11:44:38.808463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.079 [2024-12-16 11:44:38.808473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.079 [2024-12-16 11:44:38.808482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.079 [2024-12-16 11:44:38.808492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1514f30 is same with the state(6) to be set 00:22:38.079 [2024-12-16 11:44:38.808548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.079 [2024-12-16 11:44:38.808562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.079 [2024-12-16 11:44:38.808691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.079 [2024-12-16 11:44:38.808703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.079 [2024-12-16 11:44:38.808949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.079 [2024-12-16 11:44:38.808988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... remaining in-flight READ/WRITE commands on qid:1 all complete with the same ABORTED - SQ DELETION (00/08) status; the repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs are elided ...]
00:22:38.081 [2024-12-16 11:44:38.815841] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.081 [2024-12-16 11:44:38.815853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.081 [2024-12-16 11:44:38.815864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.081 [2024-12-16 11:44:38.815874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.081 [2024-12-16 11:44:38.815885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.081 [2024-12-16 11:44:38.815894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.081 [2024-12-16 11:44:38.815905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.081 [2024-12-16 11:44:38.815914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.081 [2024-12-16 11:44:38.816159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.081 [2024-12-16 11:44:38.816174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.081 [2024-12-16 11:44:38.816185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.081 [2024-12-16 11:44:38.816194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.081 [2024-12-16 11:44:38.816211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.081 [2024-12-16 11:44:38.816220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.081 [2024-12-16 11:44:38.816495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.081 [2024-12-16 11:44:38.816517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.081 [2024-12-16 11:44:38.816529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.081 [2024-12-16 11:44:38.816538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.081 [2024-12-16 11:44:38.816549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.081 [2024-12-16 11:44:38.816558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.081 [2024-12-16 11:44:38.816569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:63 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.816578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.816589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.816598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.816830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.816842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.816853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.816862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.816873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.816882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.816892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.816901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.816912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.817046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.817315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.817343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.817364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79384 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.817591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.817626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.817647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.817666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.817686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.817902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.817925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.082 [2024-12-16 11:44:38.817945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.082 [2024-12-16 11:44:38.817965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.082 [2024-12-16 11:44:38.817984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.817995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:38.082 [2024-12-16 11:44:38.818119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.818366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.082 [2024-12-16 11:44:38.818390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.818402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.082 [2024-12-16 11:44:38.818411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.818422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.082 [2024-12-16 11:44:38.818431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.818459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.082 [2024-12-16 11:44:38.818679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.082 [2024-12-16 11:44:38.818700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79632 len:8 PRP1 0x0 PRP2 0x0 00:22:38.082 [2024-12-16 11:44:38.818710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.082 [2024-12-16 11:44:38.819039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1514f30 (9): Bad file descriptor 00:22:38.082 [2024-12-16 11:44:38.819435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:38.082 [2024-12-16 11:44:38.819552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.082 [2024-12-16 11:44:38.819576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1514f30 with addr=10.0.0.3, port=4420 00:22:38.082 [2024-12-16 11:44:38.819587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1514f30 is same with the state(6) to be set 00:22:38.082 [2024-12-16 11:44:38.819729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1514f30 (9): Bad file descriptor 00:22:38.082 [2024-12-16 11:44:38.819877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:38.082 [2024-12-16 11:44:38.820007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:38.082 [2024-12-16 11:44:38.820124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:38.082 [2024-12-16 11:44:38.820273] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:22:38.082 [2024-12-16 11:44:38.820388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:38.341 11:44:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:22:40.284 4913.50 IOPS, 19.19 MiB/s [2024-12-16T11:44:41.037Z] 3275.67 IOPS, 12.80 MiB/s [2024-12-16T11:44:41.037Z] [2024-12-16 11:44:40.820545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:40.284 [2024-12-16 11:44:40.820675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1514f30 with addr=10.0.0.3, port=4420
00:22:40.284 [2024-12-16 11:44:40.820692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1514f30 is same with the state(6) to be set
00:22:40.284 [2024-12-16 11:44:40.820719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1514f30 (9): Bad file descriptor
00:22:40.284 [2024-12-16 11:44:40.820740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:40.284 [2024-12-16 11:44:40.820750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:40.284 [2024-12-16 11:44:40.820762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:40.284 [2024-12-16 11:44:40.820773] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:40.284 [2024-12-16 11:44:40.820784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:40.284 11:44:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:22:40.284 11:44:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:40.284 11:44:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:22:40.542 11:44:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:22:40.542 11:44:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:22:40.542 11:44:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:22:40.542 11:44:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:22:40.800 11:44:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:22:40.800 11:44:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:22:41.995 2456.75 IOPS, 9.60 MiB/s [2024-12-16T11:44:43.007Z] 1965.40 IOPS, 7.68 MiB/s [2024-12-16T11:44:43.007Z] [2024-12-16 11:44:42.820983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:42.254 [2024-12-16 11:44:42.821073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1514f30 with addr=10.0.0.3, port=4420
00:22:42.254 [2024-12-16 11:44:42.821090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1514f30 is same with the state(6) to be set
00:22:42.254 [2024-12-16 11:44:42.821129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1514f30 (9): Bad file descriptor
00:22:42.254 [2024-12-16 11:44:42.821163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:42.254 [2024-12-16 11:44:42.821175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:42.254 [2024-12-16 11:44:42.821189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:42.254 [2024-12-16 11:44:42.821203] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:42.254 [2024-12-16 11:44:42.821216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:44.126 1637.83 IOPS, 6.40 MiB/s [2024-12-16T11:44:44.879Z] 1403.86 IOPS, 5.48 MiB/s [2024-12-16T11:44:44.879Z] [2024-12-16 11:44:44.821288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:44.126 [2024-12-16 11:44:44.821342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:44.126 [2024-12-16 11:44:44.821353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:44.126 [2024-12-16 11:44:44.821364] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:22:44.126 [2024-12-16 11:44:44.821378] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:45.116 1228.38 IOPS, 4.80 MiB/s
00:22:45.116 Latency(us)
00:22:45.116 [2024-12-16T11:44:45.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:45.116 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:45.116 Verification LBA range: start 0x0 length 0x4000
00:22:45.116 NVMe0n1 : 8.19 1200.44 4.69 15.64 0.00 105240.93 2174.60 7046430.72
00:22:45.116 [2024-12-16T11:44:45.869Z] ===================================================================================================================
00:22:45.116 [2024-12-16T11:44:45.869Z] Total : 1200.44 4.69 15.64 0.00 105240.93 2174.60 7046430.72
00:22:45.116 {
00:22:45.116   "results": [
00:22:45.116     {
00:22:45.116       "job": "NVMe0n1",
00:22:45.116       "core_mask": "0x4",
00:22:45.116       "workload": "verify",
00:22:45.116       "status": "finished",
00:22:45.116       "verify_range": {
00:22:45.116         "start": 0,
00:22:45.116         "length": 16384
00:22:45.116       },
00:22:45.116       "queue_depth": 128,
00:22:45.116       "io_size": 4096,
00:22:45.116       "runtime": 8.186152,
00:22:45.116       "iops": 1200.441917032569,
00:22:45.116       "mibps": 4.689226238408473,
00:22:45.116       "io_failed": 128,
00:22:45.116       "io_timeout": 0,
00:22:45.116       "avg_latency_us": 105240.92545326697,
00:22:45.116       "min_latency_us": 2174.6036363636363,
00:22:45.116       "max_latency_us": 7046430.72
00:22:45.116     }
00:22:45.116   ],
00:22:45.116   "core_count": 1
00:22:45.116 }
00:22:46.051 11:44:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:22:46.051 11:44:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:22:46.051 11:44:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:46.310 11:44:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:22:46.310 11:44:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:22:46.310 11:44:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
11:44:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:22:46.569 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:22:46.569 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 99068
00:22:46.569 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 99035
00:22:46.569 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99035 ']'
00:22:46.569 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99035
00:22:46.569 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:22:46.569 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:46.569 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99035
00:22:46.569 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:46.569 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:46.569 killing process with pid 99035
11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99035'
11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99035
00:22:46.569 Received shutdown signal, test time was about 9.588011 seconds
00:22:46.569
00:22:46.569 Latency(us)
00:22:46.569 [2024-12-16T11:44:47.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:46.569 [2024-12-16T11:44:47.322Z] ===================================================================================================================
00:22:46.569 [2024-12-16T11:44:47.322Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99035
00:22:46.827 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:47.087 [2024-12-16 11:44:47.685868] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:47.087 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=99228
00:22:47.087 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:22:47.087 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 99228 /var/tmp/bdevperf.sock
00:22:47.087 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99228 ']'
00:22:47.087 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:47.087 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:47.087 11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:47.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
11:44:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:22:47.087 [2024-12-16 11:44:47.763742] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:22:47.087 [2024-12-16 11:44:47.763871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99228 ]
00:22:47.345 [2024-12-16 11:44:47.904913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:47.345 [2024-12-16 11:44:47.970328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:22:48.281 11:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:48.281 11:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:22:48.281 11:44:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:22:48.540 11:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:22:48.800 NVMe0n1
00:22:48.800 11:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:48.800 11:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=99270
00:22:48.800 11:44:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:22:49.058 Running I/O for 10 seconds...
00:22:49.996 11:44:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:49.996 8265.00 IOPS, 32.29 MiB/s [2024-12-16T11:44:50.749Z] [2024-12-16 11:44:50.741014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825850 is same with the state(6) to be set
[... the same tcp.c:1790:nvmf_tcp_qpair_set_recv_state message for tqpair=0x825850 repeated, trimmed ...]
00:22:49.996 [2024-12-16 11:44:50.742754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.996 [2024-12-16 11:44:50.742800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ (lba 77128-77272) and WRITE (lba 77448-77864) command/completion pairs, all ABORTED - SQ DELETION (00/08), trimmed ...]
00:22:50.258 [2024-12-16 11:44:50.748712] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.258 [2024-12-16 11:44:50.748917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.258 [2024-12-16 11:44:50.748931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.258 [2024-12-16 11:44:50.748954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.258 [2024-12-16 11:44:50.748962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.258 [2024-12-16 11:44:50.748973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.258 [2024-12-16 11:44:50.748983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.258 [2024-12-16 11:44:50.749255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.258 [2024-12-16 11:44:50.749273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.258 [2024-12-16 11:44:50.749286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.258 [2024-12-16 11:44:50.749295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.258 [2024-12-16 11:44:50.749446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.258 [2024-12-16 11:44:50.749639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.258 [2024-12-16 11:44:50.749654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.258 [2024-12-16 11:44:50.749664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.749675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.749690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.749702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.749711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.749723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.749987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.750068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.750089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.750110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.750130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.750163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.750555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.750577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.750597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.750618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.750637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.259 [2024-12-16 11:44:50.750656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.750954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.750979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.750990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 
[2024-12-16 11:44:50.751471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.751980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.751988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.752000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.752009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.752019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.752028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.752038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.752047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.752057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.752437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.752464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.752474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.752486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.752495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.752506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.752515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.259 [2024-12-16 11:44:50.752525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.259 [2024-12-16 11:44:50.752534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.260 [2024-12-16 11:44:50.752544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.260 [2024-12-16 11:44:50.752561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.260 [2024-12-16 11:44:50.752572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.260 [2024-12-16 11:44:50.752925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.260 [2024-12-16 11:44:50.752943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:62 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.260 [2024-12-16 11:44:50.752952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.260 [2024-12-16 11:44:50.752962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.260 [2024-12-16 11:44:50.752971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.260 [2024-12-16 11:44:50.752983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:50.260 [2024-12-16 11:44:50.752991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.260 [2024-12-16 11:44:50.753326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:50.260 [2024-12-16 11:44:50.753352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78128 len:8 PRP1 0x0 PRP2 0x0 00:22:50.260 [2024-12-16 11:44:50.753363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.260 [2024-12-16 11:44:50.753378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:50.260 [2024-12-16 11:44:50.753386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:50.260 [2024-12-16 11:44:50.753394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78136 len:8 PRP1 0x0 PRP2 0x0 00:22:50.260 [2024-12-16 11:44:50.753403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.260 [2024-12-16 11:44:50.753836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.260 [2024-12-16 11:44:50.753862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.260 [2024-12-16 11:44:50.753874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.260 [2024-12-16 11:44:50.753884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.260 [2024-12-16 11:44:50.753894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.260 [2024-12-16 11:44:50.753903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.260 [2024-12-16 11:44:50.753913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.260 [2024-12-16 11:44:50.753922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.260 [2024-12-16 11:44:50.753930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1353f30 is same with the state(6) to be set 
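Decoding the status above: spdk_nvme_print_completion prints NVMe completion status as "(SCT/SC)" in hex, so "(00/08)" is status code type 0x0 (the generic set) with status code 0x08, which the NVMe base spec defines as Command Aborted due to SQ Deletion: the expected fate of queued I/O once the queue pair is torn down. A minimal decode sketch in Python; the helper name and the abbreviated status table are illustrative, not SPDK code.

    # Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion, e.g. "(00/08)".
    GENERIC_STATUS = {  # a few SCT 0x0 (generic) codes, labelled as SPDK prints them
        0x00: "SUCCESS",
        0x07: "ABORTED - BY REQUEST",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode_status(pair: str) -> str:
        sct, sc = (int(x, 16) for x in pair.strip("()").split("/"))
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
        return f"sct 0x{sct:x} / sc 0x{sc:02x}"

    print(decode_status("(00/08)"))  # -> ABORTED - SQ DELETION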
00:22:50.260 [2024-12-16 11:44:50.754543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:50.260 [2024-12-16 11:44:50.754584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1353f30 (9): Bad file descriptor 00:22:50.260 [2024-12-16 11:44:50.754809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.260 [2024-12-16 11:44:50.754842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1353f30 with addr=10.0.0.3, port=4420 00:22:50.260 [2024-12-16 11:44:50.754855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1353f30 is same with the state(6) to be set 00:22:50.260 [2024-12-16 11:44:50.754877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1353f30 (9): Bad file descriptor 00:22:50.260 [2024-12-16 11:44:50.754894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:50.260 [2024-12-16 11:44:50.754904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:50.260 [2024-12-16 11:44:50.754914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:50.260 [2024-12-16 11:44:50.754925] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:50.260 [2024-12-16 11:44:50.754935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:50.260 11:44:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:51.197 4820.00 IOPS, 18.83 MiB/s [2024-12-16T11:44:51.950Z] [2024-12-16 11:44:51.755064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.197 [2024-12-16 11:44:51.755179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1353f30 with addr=10.0.0.3, port=4420 00:22:51.197 [2024-12-16 11:44:51.755198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1353f30 is same with the state(6) to be set 00:22:51.197 [2024-12-16 11:44:51.755222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1353f30 (9): Bad file descriptor 00:22:51.197 [2024-12-16 11:44:51.755243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:51.197 [2024-12-16 11:44:51.755253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:51.197 [2024-12-16 11:44:51.755263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:51.197 [2024-12-16 11:44:51.755274] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
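Side note on the connect() failures above: errno 111 is ECONNREFUSED on Linux. The test has removed the target's listener, so every reconnect attempt to 10.0.0.3:4420 is refused until the nvmf_subsystem_add_listener call just below restores it. Quick check in Python on Linux:

    import errno, os

    # errno 111 as reported by posix_sock_create above; the numeric value is Linux-specific
    print(errno.errorcode[111])  # -> 'ECONNREFUSED'
    print(os.strerror(111))      # -> 'Connection refused'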
00:22:51.197 [2024-12-16 11:44:51.755285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:51.197 11:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:51.455 [2024-12-16 11:44:52.044813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:51.455 11:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 99270 00:22:52.023 3213.33 IOPS, 12.55 MiB/s [2024-12-16T11:44:52.776Z] [2024-12-16 11:44:52.766318] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:53.895 2410.00 IOPS, 9.41 MiB/s [2024-12-16T11:44:56.027Z] 3430.20 IOPS, 13.40 MiB/s [2024-12-16T11:44:56.601Z] 4355.67 IOPS, 17.01 MiB/s [2024-12-16T11:44:57.987Z] 5033.00 IOPS, 19.66 MiB/s [2024-12-16T11:44:58.923Z] 5485.50 IOPS, 21.43 MiB/s [2024-12-16T11:44:59.859Z] 5862.11 IOPS, 22.90 MiB/s [2024-12-16T11:44:59.859Z] 6162.60 IOPS, 24.07 MiB/s 00:22:59.106 Latency(us) 00:22:59.106 [2024-12-16T11:44:59.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.106 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:59.106 Verification LBA range: start 0x0 length 0x4000 00:22:59.106 NVMe0n1 : 10.01 6169.28 24.10 0.00 0.00 20721.12 2174.60 3035150.89 00:22:59.106 [2024-12-16T11:44:59.859Z] =================================================================================================================== 00:22:59.106 [2024-12-16T11:44:59.859Z] Total : 6169.28 24.10 0.00 0.00 20721.12 2174.60 3035150.89 00:22:59.106 { 00:22:59.106 "results": [ 00:22:59.106 { 00:22:59.106 "job": "NVMe0n1", 00:22:59.106 "core_mask": "0x4", 00:22:59.106 "workload": "verify", 00:22:59.106 "status": "finished", 00:22:59.106 "verify_range": { 00:22:59.106 "start": 0, 00:22:59.106 "length": 16384 00:22:59.106 }, 00:22:59.106 "queue_depth": 128, 00:22:59.106 "io_size": 4096, 00:22:59.106 "runtime": 10.006193, 00:22:59.106 "iops": 6169.279365289077, 00:22:59.106 "mibps": 24.098747520660456, 00:22:59.106 "io_failed": 0, 00:22:59.106 "io_timeout": 0, 00:22:59.106 "avg_latency_us": 20721.11991635262, 00:22:59.106 "min_latency_us": 2174.6036363636363, 00:22:59.106 "max_latency_us": 3035150.8945454545 00:22:59.106 } 00:22:59.106 ], 00:22:59.106 "core_count": 1 00:22:59.106 } 00:22:59.106 11:44:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=99387 00:22:59.106 11:44:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:59.106 11:44:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:22:59.106 Running I/O for 10 seconds... 
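Sanity check on the results block above, plain arithmetic rather than anything taken from the log: the reported MiB/s is just IOPS times the 4 KiB I/O size.

    # Recompute "mibps" from the JSON results above.
    iops = 6169.279365289077       # "iops" field
    io_size = 4096                 # "io_size" field, bytes per I/O
    print(iops * io_size / 2**20)  # -> 24.098747..., matching "mibps"

The 10-second average sits well below the closing per-second samples, presumably because the run spans the stall while the listener was gone and the controller was resetting.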
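The rpc.py invocations at host/timeout.sh@91 above and @99 just below are thin JSON-RPC 2.0 clients talking to SPDK over a Unix socket. A hand-rolled sketch of the @91 call, assuming the default /var/tmp/spdk.sock RPC socket path (the bdevperf.py call above instead targets /var/tmp/bdevperf.sock via -s):

    import json, socket

    req = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_listener",  # "nvmf_subsystem_remove_listener" undoes it
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "listen_address": {"trtype": "tcp", "traddr": "10.0.0.3", "trsvcid": "4420"},
        },
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")       # assumed default rpc.py socket path
        sock.sendall(json.dumps(req).encode())
        print(sock.recv(65536).decode())         # {"jsonrpc":"2.0","id":1,"result":true} on success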
00:23:00.043 11:45:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:00.338 7622.00 IOPS, 29.77 MiB/s [2024-12-16T11:45:01.091Z] [2024-12-16 11:45:00.913700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x823d40 is same with the state(6) to be set
[... this *ERROR* line repeats roughly 85 more times, timestamps 11:45:00.913759-914471, as nvmf_tcp tears the qpair down after the listener is removed ...]
00:23:00.339 [2024-12-16 11:45:00.914966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.339 [2024-12-16 11:45:00.915004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUESTs cid:2, cid:1 and cid:0 are aborted the same way ...]
00:23:00.339 [2024-12-16 11:45:00.915076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1353f30 is same with the state(6) to be set
[... roughly 45 command+completion pairs omitted: queued READs lba 71864-72160 (SGL TRANSPORT DATA BLOCK) and WRITEs lba 72240-72288 (SGL DATA BLOCK) on qid:1, each completed ABORTED - SQ DELETION (00/08), timestamps 11:45:00.915125-919315 ...]
00:23:00.340 [2024-12-16 11:45:00.919326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.340 [2024-12-16 11:45:00.919335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.340 [2024-12-16 11:45:00.919346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.340 [2024-12-16 11:45:00.919591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.340 [2024-12-16 11:45:00.919617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.340 [2024-12-16 11:45:00.919638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.340 [2024-12-16 11:45:00.919650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.340 [2024-12-16 11:45:00.919659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.340 [2024-12-16 11:45:00.919670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.340 [2024-12-16 11:45:00.919679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.340 [2024-12-16 11:45:00.919690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.340 [2024-12-16 11:45:00.919699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.340 [2024-12-16 11:45:00.919710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.340 [2024-12-16 11:45:00.919718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.340 [2024-12-16 11:45:00.920032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.340 [2024-12-16 11:45:00.920054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.920067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.920076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.920087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.920096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.920106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.920115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.920126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.920136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.920162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.920172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.920307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.920602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.920679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.920694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.920710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.920719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.920730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.920739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.920749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.920758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.920898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.920912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.920923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921200] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921743] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.921802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.921811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.922191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.922204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.922215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.922225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.922237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.922246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.922257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.922266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.922277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.341 [2024-12-16 11:45:00.922286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.922297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.341 [2024-12-16 11:45:00.922306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.922316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72176 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.341 [2024-12-16 11:45:00.922325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.922767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.341 [2024-12-16 11:45:00.922860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.922876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.341 [2024-12-16 11:45:00.922885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.922896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.341 [2024-12-16 11:45:00.922906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.922916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.341 [2024-12-16 11:45:00.922926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.923225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.341 [2024-12-16 11:45:00.923422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.923436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.341 [2024-12-16 11:45:00.923447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.341 [2024-12-16 11:45:00.923464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.342 [2024-12-16 11:45:00.923480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.923491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.923500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.923511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.923520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.923531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:00.342 [2024-12-16 11:45:00.923709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.923822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.923838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.923849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.923858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.923996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 
11:45:00.924390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.924883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.924993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.925017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.925173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.925196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.925317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.925338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.925358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.925601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.925626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.925646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.925671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.925696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.925836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.342 [2024-12-16 11:45:00.925973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.342 [2024-12-16 11:45:00.926223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.342 [2024-12-16 11:45:00.926244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.342 [2024-12-16 11:45:00.926254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72880 len:8 PRP1 0x0 PRP2 0x0 00:23:00.342 [2024-12-16 11:45:00.926264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.343 [2024-12-16 11:45:00.926362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1353f30 (9): Bad file descriptor 00:23:00.343 [2024-12-16 11:45:00.927000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:00.343 [2024-12-16 11:45:00.927123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.343 [2024-12-16 11:45:00.927263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1353f30 with addr=10.0.0.3, port=4420 00:23:00.343 [2024-12-16 11:45:00.927277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1353f30 is same with the state(6) to be set 00:23:00.343 [2024-12-16 11:45:00.927533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1353f30 (9): Bad file descriptor 00:23:00.343 [2024-12-16 11:45:00.927558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:00.343 [2024-12-16 11:45:00.927569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:00.343 [2024-12-16 11:45:00.927690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:00.343 [2024-12-16 11:45:00.927704] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:00.343 [2024-12-16 11:45:00.927824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:00.343 11:45:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:01.278 4491.50 IOPS, 17.54 MiB/s [2024-12-16T11:45:02.031Z] [2024-12-16 11:45:01.928163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.278 [2024-12-16 11:45:01.928244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1353f30 with addr=10.0.0.3, port=4420 00:23:01.278 [2024-12-16 11:45:01.928263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1353f30 is same with the state(6) to be set 00:23:01.278 [2024-12-16 11:45:01.928292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1353f30 (9): Bad file descriptor 00:23:01.278 [2024-12-16 11:45:01.928314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:01.278 [2024-12-16 11:45:01.928324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:01.278 [2024-12-16 11:45:01.928346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:23:01.278 [2024-12-16 11:45:01.928358] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:01.278 [2024-12-16 11:45:01.928370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:02.215 2994.33 IOPS, 11.70 MiB/s [2024-12-16T11:45:02.968Z] [2024-12-16 11:45:02.928542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.215 [2024-12-16 11:45:02.928627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1353f30 with addr=10.0.0.3, port=4420 00:23:02.215 [2024-12-16 11:45:02.928644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1353f30 is same with the state(6) to be set 00:23:02.215 [2024-12-16 11:45:02.928687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1353f30 (9): Bad file descriptor 00:23:02.215 [2024-12-16 11:45:02.928724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:02.215 [2024-12-16 11:45:02.928737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:02.215 [2024-12-16 11:45:02.928749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:02.215 [2024-12-16 11:45:02.928761] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:02.215 [2024-12-16 11:45:02.928773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:03.411 2245.75 IOPS, 8.77 MiB/s [2024-12-16T11:45:04.164Z] [2024-12-16 11:45:03.931989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.411 [2024-12-16 11:45:03.932054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1353f30 with addr=10.0.0.3, port=4420 00:23:03.411 [2024-12-16 11:45:03.932071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1353f30 is same with the state(6) to be set 00:23:03.411 [2024-12-16 11:45:03.932600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1353f30 (9): Bad file descriptor 00:23:03.411 [2024-12-16 11:45:03.933023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:03.411 [2024-12-16 11:45:03.933055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:03.411 [2024-12-16 11:45:03.933069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:03.411 [2024-12-16 11:45:03.933081] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
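Each failed attempt above leaves the same record signature: connect() to 10.0.0.3:4420 fails with errno 111 (ECONNREFUSED) because the target listener is gone, the qpair flush then reports a bad file descriptor, controller reinitialization fails, and bdev_nvme schedules the next reset after the configured reconnect delay. As an illustration only, here is a minimal Python sketch of that fixed-delay reconnect shape; the address comes from the trace, but the one-second delay and the attempt bound are assumptions, and the real logic lives in C inside SPDK's bdev_nvme, not in a helper like this:

    import errno
    import socket
    import time

    def reconnect(addr=("10.0.0.3", 4420), delay=1.0, max_attempts=30):
        # Retry while the target refuses connections (errno = 111 in the
        # trace above); give up after the attempt budget is spent, which
        # loosely mirrors the ctrlr-loss-timeout path in the log.
        for _ in range(max_attempts):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                s.connect(addr)   # succeeds once the listener is back
                return s          # "Resetting controller successful"
            except OSError as e:
                s.close()
                if e.errno != errno.ECONNREFUSED:
                    raise         # a different failure: fail the reset
                time.sleep(delay) # reconnect delay between attempts
        raise TimeoutError("controller loss timeout exceeded")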
00:23:03.411 [2024-12-16 11:45:03.933093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:03.411 11:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:03.669 [2024-12-16 11:45:04.222096] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:03.669 11:45:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 99387 00:23:04.235 1796.60 IOPS, 7.02 MiB/s [2024-12-16T11:45:04.988Z] [2024-12-16 11:45:04.958015] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:23:06.127 2526.17 IOPS, 9.87 MiB/s [2024-12-16T11:45:07.816Z] 3339.14 IOPS, 13.04 MiB/s [2024-12-16T11:45:08.752Z] 3912.25 IOPS, 15.28 MiB/s [2024-12-16T11:45:10.128Z] 4405.89 IOPS, 17.21 MiB/s [2024-12-16T11:45:10.128Z] 4858.30 IOPS, 18.98 MiB/s 00:23:09.375 Latency(us) 00:23:09.375 [2024-12-16T11:45:10.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.375 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:09.375 Verification LBA range: start 0x0 length 0x4000 00:23:09.375 NVMe0n1 : 10.01 4866.28 19.01 3597.84 0.00 15094.56 2144.81 3035150.89 00:23:09.375 [2024-12-16T11:45:10.128Z] =================================================================================================================== 00:23:09.375 [2024-12-16T11:45:10.128Z] Total : 4866.28 19.01 3597.84 0.00 15094.56 0.00 3035150.89 00:23:09.375 { 00:23:09.375 "results": [ 00:23:09.375 { 00:23:09.375 "job": "NVMe0n1", 00:23:09.375 "core_mask": "0x4", 00:23:09.375 "workload": "verify", 00:23:09.375 "status": "finished", 00:23:09.375 "verify_range": { 00:23:09.375 "start": 0, 00:23:09.375 "length": 16384 00:23:09.375 }, 00:23:09.375 "queue_depth": 128, 00:23:09.375 "io_size": 4096, 00:23:09.375 "runtime": 10.0099, 00:23:09.375 "iops": 4866.282380443361, 00:23:09.375 "mibps": 19.00891554860688, 00:23:09.375 "io_failed": 36014, 00:23:09.375 "io_timeout": 0, 00:23:09.375 "avg_latency_us": 15094.556478017115, 00:23:09.376 "min_latency_us": 2144.8145454545456, 00:23:09.376 "max_latency_us": 3035150.8945454545 00:23:09.376 } 00:23:09.376 ], 00:23:09.376 "core_count": 1 00:23:09.376 } 00:23:09.376 11:45:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 99228 00:23:09.376 11:45:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99228 ']' 00:23:09.376 11:45:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99228 00:23:09.376 11:45:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:09.376 11:45:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.376 11:45:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99228 00:23:09.376 killing process with pid 99228 00:23:09.376 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.376 00:23:09.376 Latency(us) 00:23:09.376 [2024-12-16T11:45:10.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.376 [2024-12-16T11:45:10.129Z] =================================================================================================================== 00:23:09.376 [2024-12-16T11:45:10.129Z] Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.376 11:45:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:09.376 11:45:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:09.376 11:45:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99228' 00:23:09.376 11:45:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99228 00:23:09.376 11:45:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99228 00:23:09.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.376 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=99518 00:23:09.376 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 99518 /var/tmp/bdevperf.sock 00:23:09.376 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:09.376 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99518 ']' 00:23:09.376 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.376 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.376 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.376 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.376 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:09.376 [2024-12-16 11:45:10.082251] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:23:09.376 [2024-12-16 11:45:10.082366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99518 ] 00:23:09.635 [2024-12-16 11:45:10.232992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.635 [2024-12-16 11:45:10.305714] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.901 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.901 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:09.901 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99518 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:09.901 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=99528 00:23:09.901 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:10.160 11:45:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:10.418 NVMe0n1 00:23:10.418 11:45:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=99586 00:23:10.418 11:45:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:10.418 11:45:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:10.676 Running I/O for 10 seconds... 
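For context on the errors that follow: host/timeout.sh bounces the target listener over JSON-RPC while bdevperf keeps I/O in flight, which is what aborts the queued commands (SQ DELETION) and forces the host into the reconnect loop traced earlier. A hypothetical condensed sketch of that remove/wait/re-add cycle, using the same rpc.py arguments that appear verbatim in the trace (the 3-second window mirrors the script's 'sleep 3'; this helper is illustrative and not part of the test suite):

    import subprocess
    import time

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path from the trace
    NQN = "nqn.2016-06.io.spdk:cnode1"
    LISTENER = ["-t", "tcp", "-a", "10.0.0.3", "-s", "4420"]

    def rpc(*args):
        # Invoke the target's JSON-RPC CLI exactly as the shell script does.
        subprocess.run([RPC, *args], check=True)

    # Drop the listener: in-flight I/O is aborted (SQ deletion) and the
    # host's connect() attempts start failing with errno 111 ...
    rpc("nvmf_subsystem_remove_listener", NQN, *LISTENER)
    time.sleep(3)  # window with no listener, as in the script's sleep 3
    # ... then restore it so the pending controller reset can succeed.
    rpc("nvmf_subsystem_add_listener", NQN, *LISTENER)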
00:23:11.645 11:45:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:11.645 16578.00 IOPS, 64.76 MiB/s [2024-12-16T11:45:12.398Z] [2024-12-16 11:45:12.383772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.645 [2024-12-16 11:45:12.384397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.384523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a90 is same with the state(6) to be set 00:23:11.646 [2024-12-16 11:45:12.385595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.385667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.385693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.385704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.385715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:11.646 [2024-12-16 11:45:12.385724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.385734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.385743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.385753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.385761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.385771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.385779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.385789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.385797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.385807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.385814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.385823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.385831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.385841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.385848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.385859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.385867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.385877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.385885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.385895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.385902] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.646 [2024-12-16 11:45:12.386900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.646 [2024-12-16 11:45:12.386910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.386918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.386928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.387216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.387231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.387240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.387251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.387260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.387271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.387281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.387292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.387300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.387315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.387323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.387334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.387342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.387681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.387694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.387704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.387713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.387723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.387731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.387741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.387749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.387760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.387768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.388123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.388209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.388227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.388236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.388246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.388255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.388266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.388274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.388285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.388293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.388593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.388604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.388617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.388627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.388638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.388839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.388857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.388868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
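Each READ/ABORTED couplet in this stretch is one queued I/O being failed back while the controller resets: nvme_io_qpair_print_command prints the queued read (cid, nsid, lba) and spdk_nvme_print_completion prints the matching ABORTED - SQ DELETION (00/08) status that bdev_nvme then surfaces as an I/O error. A minimal bash sketch for tallying these couplets out of a saved copy of this console output; the file name build.log is an assumption, and the match strings are copied verbatim from the records above:

  #!/usr/bin/env bash
  log=build.log   # assumed name for a saved copy of this console log
  # Every aborted command completes with the same ABORTED - SQ DELETION status,
  # so counting those completions counts the aborted requests (I/O and admin).
  grep -c 'ABORTED - SQ DELETION (00/08)' "$log"
  # -o keeps just the matching fragment, since several records share one
  # physical line in this capture; this lists the cid/lba of each dropped read.
  grep -o 'READ sqid:1 cid:[0-9]* nsid:1 lba:[0-9]*' "$log" | head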
00:23:11.647 [2024-12-16 11:45:12.389386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.389987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.389995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.390005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.390014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.390086] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.647 [2024-12-16 11:45:12.390100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.647 [2024-12-16 11:45:12.390111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.390120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.390130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.390248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.390265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.390275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.390285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.390293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.390302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.390431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.390710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.390733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.390834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.390847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.390858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.390868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.390880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.390889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.390900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.390909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.390920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.390930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.391038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.391060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.391080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.391469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.391495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.391514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.391533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.391828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.391850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.391873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.391894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.391915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.391927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.392070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.392092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.392394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.392420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.392429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.392441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.392450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.392460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.392469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.392478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.392486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.392597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90048 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.392615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.392626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.392766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.393043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.393059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.393071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.393082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.393092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.393101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.393227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.393238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.393249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.393259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.393584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.393708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.393732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.393851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.393887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.648 [2024-12-16 11:45:12.394156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.648 [2024-12-16 11:45:12.394176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:11.648 [2024-12-16 11:45:12.394187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.649 [2024-12-16 11:45:12.394496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.649 [2024-12-16 11:45:12.394508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.649 [2024-12-16 11:45:12.394519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.649 [2024-12-16 11:45:12.394529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.649 [2024-12-16 11:45:12.394540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.649 [2024-12-16 11:45:12.394548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.649 [2024-12-16 11:45:12.394559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.649 [2024-12-16 11:45:12.394567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.649 [2024-12-16 11:45:12.394827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.649 [2024-12-16 11:45:12.394839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.649 [2024-12-16 11:45:12.394850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.649 [2024-12-16 11:45:12.394859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.649 [2024-12-16 11:45:12.394871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.649 [2024-12-16 11:45:12.394879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.649 [2024-12-16 11:45:12.394890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.649 [2024-12-16 11:45:12.395292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.649 [2024-12-16 11:45:12.395310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.649 [2024-12-16 11:45:12.395320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.649 [2024-12-16 11:45:12.395332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.649 [2024-12-16 11:45:12.395342] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.649 [2024-12-16 11:45:12.395354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.649 [2024-12-16 11:45:12.395443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.649 [2024-12-16 11:45:12.395465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.649 [2024-12-16 11:45:12.395492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.908 [2024-12-16 11:45:12.395736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.908 [2024-12-16 11:45:12.395758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.908 [2024-12-16 11:45:12.395771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.908 [2024-12-16 11:45:12.395781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.908 [2024-12-16 11:45:12.395818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.909 [2024-12-16 11:45:12.395829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.395841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.909 [2024-12-16 11:45:12.395851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.395941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.909 [2024-12-16 11:45:12.395956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.395970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.909 [2024-12-16 11:45:12.395980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.395992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.909 [2024-12-16 11:45:12.396001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.396286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.909 [2024-12-16 11:45:12.396308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.396322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.909 [2024-12-16 11:45:12.396331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.396343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.909 [2024-12-16 11:45:12.396352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.396364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.909 [2024-12-16 11:45:12.396373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.396612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:11.909 [2024-12-16 11:45:12.396632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:11.909 [2024-12-16 11:45:12.396641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29656 len:8 PRP1 0x0 PRP2 0x0 00:23:11.909 [2024-12-16 11:45:12.396651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.397236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.909 [2024-12-16 11:45:12.397280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.397293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.909 [2024-12-16 11:45:12.397302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.397312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.909 [2024-12-16 11:45:12.397321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.397331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.909 [2024-12-16 11:45:12.397339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.909 [2024-12-16 11:45:12.397348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d7f30 is same with the state(6) to be set 00:23:11.909 [2024-12-16 11:45:12.397837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:11.909 [2024-12-16 11:45:12.397873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x12d7f30 (9): Bad file descriptor 00:23:11.909 [2024-12-16 11:45:12.398074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.909 [2024-12-16 11:45:12.398104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d7f30 with addr=10.0.0.3, port=4420 00:23:11.909 [2024-12-16 11:45:12.398116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d7f30 is same with the state(6) to be set 00:23:11.909 [2024-12-16 11:45:12.398399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d7f30 (9): Bad file descriptor 00:23:11.909 [2024-12-16 11:45:12.398430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:11.909 [2024-12-16 11:45:12.398442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:11.909 [2024-12-16 11:45:12.398578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:11.909 [2024-12-16 11:45:12.398695] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:11.909 [2024-12-16 11:45:12.398713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:11.909 11:45:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 99586 00:23:13.782 9280.50 IOPS, 36.25 MiB/s [2024-12-16T11:45:14.535Z] 6187.00 IOPS, 24.17 MiB/s [2024-12-16T11:45:14.535Z] [2024-12-16 11:45:14.399037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.782 [2024-12-16 11:45:14.399127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d7f30 with addr=10.0.0.3, port=4420 00:23:13.782 [2024-12-16 11:45:14.399161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d7f30 is same with the state(6) to be set 00:23:13.782 [2024-12-16 11:45:14.399195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d7f30 (9): Bad file descriptor 00:23:13.782 [2024-12-16 11:45:14.399218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:13.782 [2024-12-16 11:45:14.399229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:13.782 [2024-12-16 11:45:14.399241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:13.782 [2024-12-16 11:45:14.399254] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:23:13.782 [2024-12-16 11:45:14.399266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:15.654 4640.25 IOPS, 18.13 MiB/s [2024-12-16T11:45:16.407Z] 3712.20 IOPS, 14.50 MiB/s [2024-12-16T11:45:16.407Z] [2024-12-16 11:45:16.399472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:15.654 [2024-12-16 11:45:16.399550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d7f30 with addr=10.0.0.3, port=4420 00:23:15.654 [2024-12-16 11:45:16.399569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d7f30 is same with the state(6) to be set 00:23:15.654 [2024-12-16 11:45:16.399599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d7f30 (9): Bad file descriptor 00:23:15.654 [2024-12-16 11:45:16.399634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:15.654 [2024-12-16 11:45:16.399649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:15.654 [2024-12-16 11:45:16.399662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:15.654 [2024-12-16 11:45:16.399677] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:15.654 [2024-12-16 11:45:16.399689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:17.531 3093.50 IOPS, 12.08 MiB/s [2024-12-16T11:45:18.543Z] 2651.57 IOPS, 10.36 MiB/s [2024-12-16T11:45:18.543Z] [2024-12-16 11:45:18.399762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:17.790 [2024-12-16 11:45:18.399867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:17.790 [2024-12-16 11:45:18.399882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:17.790 [2024-12-16 11:45:18.399893] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:23:17.790 [2024-12-16 11:45:18.399907] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
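The retry cadence is visible in the wall-clock stamps above: the dials at 11:45:12.398, 11:45:14.399 and 11:45:16.399 are each refused with connect() errno 111 (ECONNREFUSED), two seconds apart, and on the 11:45:18.399 pass the controller is already in failed state, so bdev_nvme stops retrying and the workload winds down. A small bash sketch that pulls this cadence out of a capture like this one; only the file name build.log is assumed, the patterns are quoted verbatim from the lines above:

  #!/usr/bin/env bash
  log=build.log   # assumed name for a saved copy of this console log
  # One match per refused dial; the bracketed stamp shows the two-second spacing.
  grep -o '\[2024-12-16 [0-9:.]*\] posix.c:1054:posix_sock_create: \*ERROR\*: connect() failed, errno = 111' "$log"
  # One match per reset attempt that bdev_nvme started.
  grep -c 'nvme_ctrlr_disconnect: \*NOTICE\*' "$log"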
00:23:18.726 2320.12 IOPS, 9.06 MiB/s 00:23:18.726 Latency(us) 00:23:18.726 [2024-12-16T11:45:19.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.726 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:18.726 NVMe0n1 : 8.14 2280.24 8.91 15.72 0.00 55750.72 3530.01 7046430.72 00:23:18.726 [2024-12-16T11:45:19.479Z] =================================================================================================================== 00:23:18.726 [2024-12-16T11:45:19.479Z] Total : 2280.24 8.91 15.72 0.00 55750.72 3530.01 7046430.72 00:23:18.726 { 00:23:18.726 "results": [ 00:23:18.726 { 00:23:18.726 "job": "NVMe0n1", 00:23:18.726 "core_mask": "0x4", 00:23:18.726 "workload": "randread", 00:23:18.726 "status": "finished", 00:23:18.726 "queue_depth": 128, 00:23:18.726 "io_size": 4096, 00:23:18.726 "runtime": 8.139927, 00:23:18.726 "iops": 2280.2415795620773, 00:23:18.726 "mibps": 8.907193670164364, 00:23:18.726 "io_failed": 128, 00:23:18.726 "io_timeout": 0, 00:23:18.726 "avg_latency_us": 55750.71838271419, 00:23:18.726 "min_latency_us": 3530.0072727272727, 00:23:18.726 "max_latency_us": 7046430.72 00:23:18.726 } 00:23:18.726 ], 00:23:18.726 "core_count": 1 00:23:18.726 } 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:18.726 Attaching 5 probes... 00:23:18.726 1389.770911: reset bdev controller NVMe0 00:23:18.726 1389.873083: reconnect bdev controller NVMe0 00:23:18.726 3390.778558: reconnect delay bdev controller NVMe0 00:23:18.726 3390.812817: reconnect bdev controller NVMe0 00:23:18.726 5391.254511: reconnect delay bdev controller NVMe0 00:23:18.726 5391.279230: reconnect bdev controller NVMe0 00:23:18.726 7391.675811: reconnect delay bdev controller NVMe0 00:23:18.726 7391.700382: reconnect bdev controller NVMe0 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 99528 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 99518 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99518 ']' 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99518 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99518 00:23:18.726 killing process with pid 99518 00:23:18.726 Received shutdown signal, test time was about 8.212974 seconds 00:23:18.726 00:23:18.726 Latency(us) 00:23:18.726 [2024-12-16T11:45:19.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.726 [2024-12-16T11:45:19.479Z] =================================================================================================================== 00:23:18.726 [2024-12-16T11:45:19.479Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.726 11:45:19 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99518' 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99518 00:23:18.726 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99518 00:23:18.984 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:19.242 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:19.242 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:19.242 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:19.242 11:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:19.500 rmmod nvme_tcp 00:23:19.500 rmmod nvme_fabrics 00:23:19.500 rmmod nvme_keyring 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 98938 ']' 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 98938 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98938 ']' 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98938 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98938 00:23:19.500 killing process with pid 98938 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98938' 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98938 00:23:19.500 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98938 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:19.759 11:45:20 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:19.759 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:20.018 00:23:20.018 real 0m47.308s 00:23:20.018 user 2m18.365s 00:23:20.018 sys 0m5.240s 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:20.018 ************************************ 00:23:20.018 END TEST nvmf_timeout 00:23:20.018 ************************************ 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:20.018 00:23:20.018 real 5m40.322s 00:23:20.018 user 14m33.881s 00:23:20.018 sys 1m3.894s 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:20.018 11:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
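The teardown just above is nvmf_veth_fini plus namespace removal: detach the four bridge ports, bring them down, delete the bridge and the host-side veth endpoints, delete the target-side endpoints inside the namespace, then drop the namespace. A condensed sketch of the same sequence with the interface names used throughout this log; the loop is a restructuring for brevity, and the final netns delete is the assumed effect of the _remove_spdk_ns helper:

  # condensed equivalent of the nvmf_veth_fini steps traced above (sketch)
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" nomaster
      ip link set "$port" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumed: what _remove_spdk_ns resolves to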
00:23:20.018 ************************************ 00:23:20.018 END TEST nvmf_host 00:23:20.018 ************************************ 00:23:20.018 11:45:20 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:20.019 11:45:20 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:23:20.019 11:45:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:23:20.019 11:45:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:20.019 11:45:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:20.019 11:45:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:20.019 ************************************ 00:23:20.019 START TEST nvmf_target_core_interrupt_mode 00:23:20.019 ************************************ 00:23:20.019 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:23:20.278 * Looking for test storage... 00:23:20.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:20.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.278 --rc genhtml_branch_coverage=1 00:23:20.278 --rc genhtml_function_coverage=1 00:23:20.278 --rc genhtml_legend=1 00:23:20.278 --rc geninfo_all_blocks=1 00:23:20.278 --rc geninfo_unexecuted_blocks=1 00:23:20.278 00:23:20.278 ' 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:20.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.278 --rc genhtml_branch_coverage=1 00:23:20.278 --rc genhtml_function_coverage=1 00:23:20.278 --rc genhtml_legend=1 00:23:20.278 --rc geninfo_all_blocks=1 00:23:20.278 --rc geninfo_unexecuted_blocks=1 00:23:20.278 00:23:20.278 ' 00:23:20.278 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:20.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.278 --rc genhtml_branch_coverage=1 00:23:20.278 --rc genhtml_function_coverage=1 00:23:20.278 --rc genhtml_legend=1 00:23:20.279 --rc geninfo_all_blocks=1 00:23:20.279 --rc geninfo_unexecuted_blocks=1 00:23:20.279 00:23:20.279 ' 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:20.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.279 --rc genhtml_branch_coverage=1 00:23:20.279 --rc genhtml_function_coverage=1 00:23:20.279 --rc genhtml_legend=1 00:23:20.279 --rc geninfo_all_blocks=1 00:23:20.279 --rc geninfo_unexecuted_blocks=1 00:23:20.279 00:23:20.279 ' 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:20.279 ************************************ 00:23:20.279 START TEST nvmf_abort 00:23:20.279 ************************************ 00:23:20.279 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:23:20.539 * Looking for test storage... 00:23:20.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.539 --rc genhtml_branch_coverage=1 00:23:20.539 --rc genhtml_function_coverage=1 00:23:20.539 --rc genhtml_legend=1 00:23:20.539 --rc geninfo_all_blocks=1 00:23:20.539 --rc geninfo_unexecuted_blocks=1 00:23:20.539 00:23:20.539 ' 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.539 --rc genhtml_branch_coverage=1 00:23:20.539 --rc genhtml_function_coverage=1 00:23:20.539 --rc genhtml_legend=1 00:23:20.539 --rc geninfo_all_blocks=1 00:23:20.539 --rc geninfo_unexecuted_blocks=1 00:23:20.539 00:23:20.539 ' 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.539 --rc genhtml_branch_coverage=1 00:23:20.539 --rc genhtml_function_coverage=1 00:23:20.539 --rc genhtml_legend=1 00:23:20.539 --rc geninfo_all_blocks=1 00:23:20.539 --rc geninfo_unexecuted_blocks=1 00:23:20.539 00:23:20.539 ' 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.539 --rc genhtml_branch_coverage=1 00:23:20.539 --rc genhtml_function_coverage=1 00:23:20.539 --rc genhtml_legend=1 00:23:20.539 --rc geninfo_all_blocks=1 00:23:20.539 --rc geninfo_unexecuted_blocks=1 00:23:20.539 00:23:20.539 ' 00:23:20.539 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.540 11:45:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:20.540 Cannot find device "nvmf_init_br" 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:20.540 Cannot find device "nvmf_init_br2" 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:20.540 Cannot find device "nvmf_tgt_br" 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:20.540 Cannot find device "nvmf_tgt_br2" 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:20.540 Cannot find device "nvmf_init_br" 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:20.540 Cannot find device "nvmf_init_br2" 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:23:20.540 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:20.540 Cannot find device "nvmf_tgt_br" 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:20.800 Cannot find device "nvmf_tgt_br2" 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:20.800 Cannot find device "nvmf_br" 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:20.800 Cannot find device "nvmf_init_if" 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:20.800 Cannot find device "nvmf_init_if2" 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:20.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:20.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:20.800 
11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:20.800 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:21.059 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:21.059 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:21.059 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:21.059 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:21.059 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:21.059 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:21.059 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:21.059 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:21.059 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:21.059 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:23:21.059 00:23:21.059 --- 10.0.0.3 ping statistics --- 00:23:21.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.059 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:23:21.059 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:21.059 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:21.059 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:23:21.059 00:23:21.060 --- 10.0.0.4 ping statistics --- 00:23:21.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.060 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:21.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:21.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:23:21.060 00:23:21.060 --- 10.0.0.1 ping statistics --- 00:23:21.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.060 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:21.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:23:21.060 00:23:21.060 --- 10.0.0.2 ping statistics --- 00:23:21.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.060 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=99998 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 99998 00:23:21.060 11:45:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 99998 ']' 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.060 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:21.060 [2024-12-16 11:45:21.713320] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:21.060 [2024-12-16 11:45:21.714658] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:23:21.060 [2024-12-16 11:45:21.714737] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.319 [2024-12-16 11:45:21.868456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:21.319 [2024-12-16 11:45:21.947720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.319 [2024-12-16 11:45:21.947815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.319 [2024-12-16 11:45:21.947851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.319 [2024-12-16 11:45:21.947862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.319 [2024-12-16 11:45:21.947872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.319 [2024-12-16 11:45:21.949476] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.319 [2024-12-16 11:45:21.949650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:21.319 [2024-12-16 11:45:21.949659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.577 [2024-12-16 11:45:22.085527] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:21.577 [2024-12-16 11:45:22.085790] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:21.577 [2024-12-16 11:45:22.085542] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:21.577 [2024-12-16 11:45:22.086322] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
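With the target now running in interrupt mode (three reactors from -m 0xE, each poll-group thread switched to intr mode), the abort test configures it over JSON-RPC. The calls appear one by one in the trace below; consolidated here for readability, with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB bdev, 4096-byte blocks
  # the delay bdev adds 1000000 units of latency per I/O (microseconds, so
  # about 1 s, assuming the usual bdev_delay units), keeping requests in
  # flight long enough for the abort utility to cancel them
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

The abort example is then run against that listener with queue depth 128, deliberately larger than what the controller's I/O queue can hold, so surplus requests queue at the NVMe driver where they can be aborted; that is what the 'queue size 128 less than required' notice below refers to.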
00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:22.145 [2024-12-16 11:45:22.787045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:22.145 Malloc0 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:22.145 Delay0 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:22.145 11:45:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:22.145 [2024-12-16 11:45:22.871089] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.145 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:23:22.404 [2024-12-16 11:45:23.058269] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:23:24.938 Initializing NVMe Controllers 00:23:24.938 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:23:24.938 controller IO queue size 128 less than required 00:23:24.938 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:23:24.938 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:23:24.938 Initialization complete. Launching workers. 
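[editor's note] With the data and discovery listeners up on 10.0.0.3:4420, the test drives SPDK's abort example at the subsystem: one second of queue-depth-128 reads against the deliberately slow Delay0 namespace, aborting them mid-flight. The invocation as traced:

# -c 0x1: single core; -t 1: run for one second; -q 128: 128 outstanding
# I/Os per namespace; -l warning: quiet per-command logging
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128

Reading the closing statistics that follow: 28581 aborts landed (their reads show up as the 28581 "failed" I/Os), 61 aborts completed as unsuccessful, which appears to mean the target finished those reads first (127 reads did complete), and 66 aborts could not be submitted at all, consistent with the earlier "queue size 128 less than required" warning about requests queueing at the driver.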
00:23:24.938 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28581 00:23:24.938 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28642, failed to submit 66 00:23:24.938 success 28581, unsuccessful 61, failed 0 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:24.938 rmmod nvme_tcp 00:23:24.938 rmmod nvme_fabrics 00:23:24.938 rmmod nvme_keyring 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 99998 ']' 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 99998 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 99998 ']' 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 99998 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99998 00:23:24.938 killing process with pid 99998 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99998' 00:23:24.938 
11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 99998 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 99998 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:24.938 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:25.197 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:25.197 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:25.197 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:25.198 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:25.198 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.198 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.198 11:45:25 
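[editor's note] Teardown runs the setup in reverse, and the one subtle step is the firewall cleanup: every rule added during setup carries an SPDK_NVMF comment precisely so this filter can strip those rules and nothing else. Condensed from the traced commands; the body of _remove_spdk_ns is hidden behind the xtrace redirect, so its final action here is an assumption:

# Round-trip the ruleset, dropping only SPDK-tagged rules.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Unwind the veth/bridge topology that nvmftestinit built.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk    # assumed: _remove_spdk_ns's body is not traced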
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.198 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:23:25.198 00:23:25.198 real 0m4.835s 00:23:25.198 user 0m9.531s 00:23:25.198 sys 0m1.415s 00:23:25.198 ************************************ 00:23:25.198 END TEST nvmf_abort 00:23:25.198 ************************************ 00:23:25.198 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.198 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:25.198 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:23:25.198 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:25.198 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.198 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:25.198 ************************************ 00:23:25.198 START TEST nvmf_ns_hotplug_stress 00:23:25.198 ************************************ 00:23:25.198 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:23:25.458 * Looking for test storage... 00:23:25.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:25.458 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:25.458 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:23:25.458 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.458 11:45:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:25.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.458 --rc genhtml_branch_coverage=1 00:23:25.458 --rc genhtml_function_coverage=1 00:23:25.458 --rc genhtml_legend=1 00:23:25.458 --rc geninfo_all_blocks=1 00:23:25.458 --rc geninfo_unexecuted_blocks=1 00:23:25.458 00:23:25.458 ' 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:25.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.458 --rc genhtml_branch_coverage=1 00:23:25.458 --rc genhtml_function_coverage=1 00:23:25.458 --rc genhtml_legend=1 00:23:25.458 --rc geninfo_all_blocks=1 00:23:25.458 --rc geninfo_unexecuted_blocks=1 00:23:25.458 00:23:25.458 
' 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:25.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.458 --rc genhtml_branch_coverage=1 00:23:25.458 --rc genhtml_function_coverage=1 00:23:25.458 --rc genhtml_legend=1 00:23:25.458 --rc geninfo_all_blocks=1 00:23:25.458 --rc geninfo_unexecuted_blocks=1 00:23:25.458 00:23:25.458 ' 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:25.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.458 --rc genhtml_branch_coverage=1 00:23:25.458 --rc genhtml_function_coverage=1 00:23:25.458 --rc genhtml_legend=1 00:23:25.458 --rc geninfo_all_blocks=1 00:23:25.458 --rc geninfo_unexecuted_blocks=1 00:23:25.458 00:23:25.458 ' 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.458 11:45:26 
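[editor's note] The scripts/common.sh trace above (before nvmf/common.sh takes over) is a version gate: the harness asks whether the installed lcov (1.15 here) predates 2.x and, since it does, selects the old-style --rc lcov_branch_coverage / lcov_function_coverage option names for LCOV_OPTS. A standalone rendering of the comparison logic visible in the trace, splitting fields on '.', '-' and ':' exactly as the IFS lines show (the real helper also sanitizes non-numeric fields, omitted here):

version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # equal is not less-than
}

version_lt 1.15 2 && echo "old lcov: use --rc lcov_*_coverage flags"   # prints, as in the trace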
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:25.458 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.459 11:45:26 
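[editor's note] Sandwiched between the PATH exports, build_nvmf_app_args assembles the target's argv as a bash array; nvmf/common.sh later prefixes it with the netns wrapper (the NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") line further down), which is how every nvmf_tgt invocation in this log ends up behind ip netns exec. A sketch of that assembly; the array's initial value is set before this trace window, so the first line is an assumption:

NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)    # assumed initial value
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id + full tracepoint mask
NVMF_APP+=("${NO_HUGE[@]}")                    # empty in this run: hugepages stay on
NVMF_APP+=(--interrupt-mode)                   # the "[ 1 -eq 1 ]" branch in the trace

NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
"${NVMF_APP[@]}" &    # the fully expanded command is visible where nvmfappstart fires below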
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:25.459 Cannot find device "nvmf_init_br" 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:23:25.459 Cannot find device "nvmf_init_br2" 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:25.459 Cannot find device "nvmf_tgt_br" 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:25.459 Cannot find device "nvmf_tgt_br2" 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:25.459 Cannot find device "nvmf_init_br" 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:25.459 Cannot find device "nvmf_init_br2" 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:25.459 Cannot find device "nvmf_tgt_br" 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:25.459 Cannot find device "nvmf_tgt_br2" 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:25.459 Cannot find device "nvmf_br" 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:25.459 Cannot find device "nvmf_init_if" 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:23:25.459 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:25.719 Cannot find device "nvmf_init_if2" 00:23:25.719 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:23:25.719 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:25.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:25.719 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:23:25.719 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:25.719 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:25.719 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:23:25.719 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:25.719 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:25.719 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:25.719 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:25.719 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:25.720 11:45:26 
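[editor's note] The run of "Cannot find device" / "Cannot open network namespace" lines above is expected noise: nvmf_veth_init probes for leftovers from a previous run before building the topology fresh, and on a clean host every probe fails. What it then builds, condensed to one of the two symmetric halves (the if2/br2 pair, the bridge enslaving and the firewall rules follow the same pattern in the trace just below):

ip netns add nvmf_tgt_ns_spdk

# One veth pair per side: the *_if end carries the address, the *_br end
# will be enslaved to the nvmf_br bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk      # target side lives in the netns

ip addr add 10.0.0.1/24 dev nvmf_init_if            # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                     # host-side glue between the pairs
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

Once the SPDK_NVMF-tagged ACCEPT rules for port 4420 go in, the four pings that follow verify each leg of the 10.0.0.0/24 segment, host to namespace and back.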
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:25.720 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:25.720 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms 00:23:25.720 00:23:25.720 --- 10.0.0.3 ping statistics --- 00:23:25.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.720 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:25.720 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:25.720 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:23:25.720 00:23:25.720 --- 10.0.0.4 ping statistics --- 00:23:25.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.720 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:25.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:23:25.720 00:23:25.720 --- 10.0.0.1 ping statistics --- 00:23:25.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.720 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:25.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:23:25.720 00:23:25.720 --- 10.0.0.2 ping statistics --- 00:23:25.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.720 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=100316 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 100316 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 100316 ']' 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.720 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.720 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:25.979 [2024-12-16 11:45:26.529200] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:25.979 [2024-12-16 11:45:26.530443] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:23:25.979 [2024-12-16 11:45:26.530514] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.979 [2024-12-16 11:45:26.682834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:26.239 [2024-12-16 11:45:26.760432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.239 [2024-12-16 11:45:26.760508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.239 [2024-12-16 11:45:26.760522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.239 [2024-12-16 11:45:26.760533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.239 [2024-12-16 11:45:26.760542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.239 [2024-12-16 11:45:26.762073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.239 [2024-12-16 11:45:26.762242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.239 [2024-12-16 11:45:26.762252] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.239 [2024-12-16 11:45:26.900719] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:26.239 [2024-12-16 11:45:26.900723] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:26.239 [2024-12-16 11:45:26.901259] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:26.239 [2024-12-16 11:45:26.901312] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:23:26.239 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.239 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:23:26.239 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:26.239 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.239 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:26.239 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.239 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:23:26.239 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:26.498 [2024-12-16 11:45:27.211601] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.498 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:26.757 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:27.015 [2024-12-16 11:45:27.728274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:27.015 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:27.274 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:23:27.841 Malloc0 00:23:27.841 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:27.841 Delay0 00:23:27.841 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:28.407 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:23:28.407 NULL1 00:23:28.407 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:23:28.665 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:23:28.665 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=100435 00:23:28.665 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:28.665 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:30.040 Read completed with error (sct=0, sc=11) 00:23:30.040 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:30.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:30.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:30.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:30.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:30.299 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:30.299 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:30.299 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:23:30.299 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:23:30.558 true 00:23:30.558 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:30.558 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:31.494 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:31.494 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:23:31.494 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:23:31.753 true 00:23:31.753 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:31.753 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:32.011 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:32.577 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:23:32.577 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:23:32.577 true 00:23:32.577 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:32.577 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:33.144 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:33.402 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:23:33.402 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:23:33.661 true 00:23:33.661 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:33.661 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:34.595 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:34.595 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:23:34.595 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:23:34.863 true 00:23:34.863 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:34.863 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:35.149 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:35.407 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:23:35.407 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:23:35.665 true 00:23:35.665 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:35.665 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:35.923 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:36.181 11:45:36 
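[editor's note] For orientation, the sequence this stress test has been replaying since its START banner, condensed: provisioning looks like the abort test's, but cnode1 is capped at 10 namespaces (-m 10) and gets a second, 1000 MiB null-bdev namespace, and instead of the abort tool a 30-second spdk_nvme_perf run is left in the background for the loop to race against:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512      # 1000 MiB null bdev, 512-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# 30 s of qd-128 512-byte random reads; reads racing a namespace removal fail
# by design, and the "Message suppressed 999 times" lines appear to be perf
# rate-limiting those errors (the window matches the -Q 1000 argument).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!      # 100435 in this run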
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:23:36.181 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:23:36.439 true 00:23:36.439 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:36.439 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:37.375 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:37.634 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:23:37.634 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:23:37.892 true 00:23:37.892 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:37.892 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:38.151 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:38.409 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:23:38.409 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:23:38.669 true 00:23:38.669 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:38.669 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:38.928 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:39.186 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:23:39.186 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:23:39.444 true 00:23:39.444 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:39.444 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:40.378 11:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:40.636 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:23:40.636 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:23:40.896 true 00:23:40.896 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:40.896 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:41.157 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:41.415 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:23:41.415 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:23:41.673 true 00:23:41.673 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:41.673 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:41.932 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:42.190 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:23:42.190 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:23:42.449 true 00:23:42.449 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:42.449 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:43.385 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:43.643 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:23:43.643 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:23:43.902 true 00:23:43.902 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:43.902 11:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:44.161 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:44.419 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:23:44.419 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:23:44.676 true 00:23:44.676 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:44.676 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:44.934 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:45.192 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:23:45.192 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:23:45.450 true 00:23:45.450 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:45.450 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:46.384 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:46.642 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:23:46.642 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:23:46.900 true 00:23:46.900 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:46.900 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:47.176 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:47.434 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:23:47.434 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:23:47.692 true 00:23:47.692 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:47.692 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:47.951 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:48.209 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:23:48.209 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:23:48.466 true 00:23:48.466 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:48.466 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:49.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:49.400 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:49.658 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:23:49.658 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:23:49.915 true 00:23:49.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:49.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:50.173 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:50.431 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:23:50.431 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:23:50.689 true 00:23:50.689 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:50.689 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:51.252 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:51.252 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:23:51.252 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:23:51.510 true 00:23:51.510 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:51.510 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:52.443 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:52.702 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:23:52.702 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:23:52.960 true 00:23:52.960 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:52.960 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:53.219 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:53.480 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:23:53.480 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:23:53.772 true 00:23:53.772 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:53.772 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:54.031 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:54.290 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:23:54.290 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:23:54.553 true 00:23:54.553 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:54.553 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:55.489 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:55.747 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:23:55.747 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:23:56.005 true 00:23:56.005 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:56.005 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:56.263 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:56.522 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:23:56.522 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:23:56.781 true 00:23:56.781 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:56.781 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:57.039 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:57.297 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:23:57.297 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:23:57.556 true 00:23:57.556 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:57.556 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:58.491 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:58.750 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:23:58.750 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:23:59.008 true 00:23:59.008 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:59.008 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:23:59.008 Initializing NVMe Controllers
00:23:59.008 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:23:59.008 Controller IO queue size 128, less than required.
00:23:59.008 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:59.008 Controller IO queue size 128, less than required.
00:23:59.008 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:59.009 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:59.009 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:59.009 Initialization complete. Launching workers.
00:23:59.009 ========================================================
00:23:59.009                                                  Latency(us)
00:23:59.009 Device Information                                           :       IOPS      MiB/s    Average        min        max
00:23:59.009 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     298.57       0.15  172374.61    3360.64 1046376.44
00:23:59.009 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    8235.75       4.02   15542.06    3422.22  550734.61
00:23:59.009 ========================================================
00:23:59.009 Total                                                        :    8534.32       4.17   21028.79    3360.64 1046376.44
00:23:59.009
00:23:59.267 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:59.525 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:23:59.525 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:23:59.783 true 00:23:59.783 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100435 00:23:59.783 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (100435) - No such process 00:23:59.783 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 100435 00:23:59.783 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:00.041 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:00.308 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:24:00.308 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:24:00.308 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:24:00.308 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i
< nthreads )) 00:24:00.308 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:24:00.565 null0 00:24:00.566 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:00.566 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:00.566 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:24:00.823 null1 00:24:00.823 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:00.823 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:00.823 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:24:01.082 null2 00:24:01.082 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:01.082 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:01.082 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:24:01.340 null3 00:24:01.599 11:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:01.599 11:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:01.599 11:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:24:01.599 null4 00:24:01.857 11:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:01.857 11:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:01.857 11:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:24:01.857 null5 00:24:02.116 11:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:02.116 11:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:02.116 11:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:24:02.375 null6 00:24:02.375 11:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:02.375 11:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:02.375 11:46:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:24:02.634 null7 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:02.634 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
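The ns_hotplug_stress.sh@14-@18 steps traced above are the body of the per-namespace worker: each add_remove call pins one null bdev to one namespace ID and hot-adds then hot-removes it ten times. A minimal sketch of that worker, assuming rpc.py at the path the log shows and the cnode1 subsystem already configured; the function name, argument order, and iteration count are read off the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Hot-add bdev $2 as namespace $1 of the subsystem, then hot-remove it,
    # ten times in a row, so connected hosts see the namespace flap.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }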
00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
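The interleaved @59-@64 steps around this point are eight such workers being forked: each call is backgrounded and its PID captured with $! into the pids array, which the later `wait 101472 ... 101485` step drains. A hedged sketch of that fan-out and join, reusing the add_remove function sketched above; the backgrounding with & is inferred from the pids+=($!) steps rather than shown verbatim in the trace:

    nthreads=8
    pids=()

    # One background worker per namespace, each flapping its own null bdev.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)   # remember the job's PID so it can be joined later
    done

    wait "${pids[@]}"   # block until every hotplug worker has finished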
00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
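The null0 through null7 bdevs these workers flap were created just before the fork with bdev_null_create (100 MiB, 4096-byte blocks, per the @60 steps), and the first phase of the test grew a separate NULL1 bdev one unit at a time with bdev_null_resize. A small sketch of both RPCs with the sizes the log shows; the positional arguments are bdev name, total size in MiB, and logical block size:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Create the eight 100 MiB backing bdevs the hotplug workers will use.
    for ((i = 0; i < 8; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096
    done

    # Grow an existing null bdev in place; hosts with the namespace attached
    # observe the capacity change without a detach/attach cycle.
    "$rpc" bdev_null_resize NULL1 1001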
00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 101472 101473 101475 101478 101479 101482 101483 101485 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:02.635 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:02.894 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:02.894 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:02.894 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:02.894 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:02.894 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:02.894 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:02.894 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:02.894 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.153 11:46:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.153 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:03.412 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:03.412 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:03.412 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:03.412 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:03.412 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:03.412 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:03.671 11:46:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.671 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:03.930 11:46:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:03.930 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:04.189 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:04.189 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:04.189 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:04.189 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:04.189 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:04.189 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.189 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.189 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:04.189 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.189 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.189 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:04.447 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:04.447 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.447 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.447 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:04.447 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.447 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.447 
11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:04.447 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.448 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.448 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:04.448 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.448 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.448 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:04.448 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.448 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.448 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:04.705 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:04.705 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.705 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.706 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:04.706 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:04.706 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:04.706 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:04.706 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:04.706 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:24:04.706 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:04.964 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:05.224 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:05.224 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:05.224 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:05.224 11:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:05.224 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:05.224 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:05.224 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:05.224 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:05.224 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:05.224 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:05.224 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:05.224 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:05.493 11:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:05.493 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:05.493 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:05.493 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:05.493 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:05.493 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:05.493 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:05.493 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:05.493 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:05.493 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:05.493 
11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:05.493 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:05.493 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:05.493 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:05.765 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:06.023 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:06.023 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:06.023 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:06.023 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:06.023 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:06.023 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:06.023 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:06.023 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:06.023 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:06.023 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:06.023 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:06.280 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:06.280 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:06.280 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:06.280 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:06.280 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:06.280 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:06.280 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:06.280 11:46:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:06.280 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:06.280 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:06.280 11:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:06.280 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:06.280 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:06.280 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:06.539 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:06.539 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:06.539 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:06.539 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:06.539 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:06.539 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:06.539 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:06.539 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:06.539 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:06.539 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:06.539 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:06.798 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:06.798 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:06.798 11:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:06.798 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:06.798 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:06.798 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:06.798 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:06.798 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:06.798 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:06.798 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:06.798 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:07.057 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:07.316 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.316 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.316 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:07.316 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:07.316 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.316 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.316 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:07.316 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.316 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.316 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:07.316 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:07.316 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:07.316 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:07.316 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.316 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.316 
11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.574 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:07.833 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:07.833 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.833 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.833 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:07.833 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.833 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.833 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:07.833 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:07.833 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:07.833 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:07.833 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:08.091 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:08.091 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:08.091 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.091 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:08.091 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:08.091 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:08.091 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:08.091 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:08.091 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:08.091 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.091 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:08.091 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:08.350 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.350 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:08.350 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:08.350 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.350 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:08.350 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:08.350 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:08.350 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.350 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:08.350 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.350 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:08.350 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:08.350 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.350 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:08.350 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:08.350 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:08.609 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.609 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:08.609 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:08.609 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:08.609 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.609 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:08.609 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:08.609 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.609 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:08.868 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:08.868 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.868 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:08.868 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.868 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:08.868 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:08.868 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.127 rmmod nvme_tcp 00:24:09.127 rmmod nvme_fabrics 00:24:09.127 rmmod nvme_keyring 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 
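For readers following the xtrace above: the ns_hotplug_stress entries come from a tight add/remove cycle. Below is a minimal sketch of that loop, reconstructed only from the @16–@18 script-line markers in the trace; the actual test/nvmf/target/ns_hotplug_stress.sh may organize and launch the per-namespace loops differently, and the hotplug_ns helper name here is purely illustrative. The rpc.py path and NQN are taken verbatim from the trace.

    #!/usr/bin/env bash
    # Hedged reconstruction of the loop traced above (not the verbatim test script).
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # One hot-plug cycle per namespace: nsid N is backed by bdev null(N-1),
    # matching pairs like "add_ns -n 5 ... null4" / "remove_ns ... 5" above.
    hotplug_ns() {
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do                                 # @16: (( ++i )) / (( i < 10 ))
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # @17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # @18
        done
    }

    # Assumption: one loop per namespace runs in the background, which would
    # explain the interleaved nsid ordering (1-8) in the trace.
    for n in {1..8}; do
        hotplug_ns "$n" "null$((n - 1))" &
    done
    wait

The @68 and @70 entries that follow the loop are the EXIT trap being cleared and nvmftestfini starting the teardown (module unload via modprobe -r, process kill, iptables restore, and veth/netns cleanup), which is what the remaining ns_hotplug_stress entries below record.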
00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 100316 ']' 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 100316 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 100316 ']' 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 100316 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100316 00:24:09.127 killing process with pid 100316 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100316' 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 100316 00:24:09.127 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 100316 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:09.386 11:46:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:09.386 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:09.644 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:09.644 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:09.644 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:09.644 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:09.644 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:09.644 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:09.644 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:24:09.645 00:24:09.645 real 0m44.447s 00:24:09.645 user 3m22.096s 00:24:09.645 sys 0m16.462s 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:24:09.645 ************************************ 00:24:09.645 END TEST nvmf_ns_hotplug_stress 00:24:09.645 ************************************ 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:09.645 ************************************ 00:24:09.645 START TEST nvmf_delete_subsystem 00:24:09.645 ************************************ 00:24:09.645 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:24:09.904 * Looking for test storage... 00:24:09.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:09.904 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:09.904 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:24:09.904 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:09.904 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:09.904 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:09.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.905 --rc genhtml_branch_coverage=1 00:24:09.905 --rc genhtml_function_coverage=1 00:24:09.905 --rc genhtml_legend=1 00:24:09.905 --rc geninfo_all_blocks=1 00:24:09.905 --rc geninfo_unexecuted_blocks=1 00:24:09.905 00:24:09.905 ' 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:09.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.905 --rc genhtml_branch_coverage=1 00:24:09.905 --rc genhtml_function_coverage=1 00:24:09.905 --rc genhtml_legend=1 00:24:09.905 --rc geninfo_all_blocks=1 00:24:09.905 --rc geninfo_unexecuted_blocks=1 00:24:09.905 00:24:09.905 ' 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:09.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.905 --rc genhtml_branch_coverage=1 00:24:09.905 --rc genhtml_function_coverage=1 00:24:09.905 --rc genhtml_legend=1 00:24:09.905 --rc geninfo_all_blocks=1 00:24:09.905 --rc geninfo_unexecuted_blocks=1 00:24:09.905 00:24:09.905 ' 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:09.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.905 --rc genhtml_branch_coverage=1 00:24:09.905 --rc genhtml_function_coverage=1 00:24:09.905 --rc 
genhtml_legend=1 00:24:09.905 --rc geninfo_all_blocks=1 00:24:09.905 --rc geninfo_unexecuted_blocks=1 00:24:09.905 00:24:09.905 ' 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.905 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.906 11:46:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.906 11:46:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:09.906 Cannot find device "nvmf_init_br" 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:09.906 Cannot find device "nvmf_init_br2" 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:09.906 Cannot find device "nvmf_tgt_br" 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:24:09.906 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:09.906 Cannot find device "nvmf_tgt_br2" 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:10.165 Cannot find device "nvmf_init_br" 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:10.165 Cannot find device "nvmf_init_br2" 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:10.165 Cannot find device "nvmf_tgt_br" 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:10.165 Cannot find device "nvmf_tgt_br2" 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:10.165 Cannot find device "nvmf_br" 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:10.165 Cannot find device "nvmf_init_if" 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:10.165 Cannot find device "nvmf_init_if2" 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:10.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:24:10.165 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:10.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:10.166 11:46:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:10.166 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:10.425 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:10.425 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:24:10.425 00:24:10.425 --- 10.0.0.3 ping statistics --- 00:24:10.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.425 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:10.425 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:10.425 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:24:10.425 00:24:10.425 --- 10.0.0.4 ping statistics --- 00:24:10.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.425 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:10.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:24:10.425 00:24:10.425 --- 10.0.0.1 ping statistics --- 00:24:10.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.425 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:10.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:10.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:24:10.425 00:24:10.425 --- 10.0.0.2 ping statistics --- 00:24:10.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.425 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.425 11:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:10.425 11:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=102874 00:24:10.425 11:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:24:10.425 11:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 102874 00:24:10.425 11:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 102874 ']' 00:24:10.425 11:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.425 11:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.425 11:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
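The nvmf_veth_init sequence traced above builds the test network entirely in software: initiator-side interfaces stay in the default namespace, target-side interfaces are moved into the nvmf_tgt_ns_spdk namespace, and a bridge joins the peer legs. A condensed sketch, using only commands that appear in the trace (the real helper also creates the second nvmf_init_if2/nvmf_tgt_if2 pair, brings every remaining link up, and tags each iptables rule with a comment):

    # target gets its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg + bridge leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target leg + bridge leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # both legs on one bridge
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                           # verify the 10.0.0.1 -> 10.0.0.3 path

Running the target behind ip netns exec gives each run an isolated TCP path without touching real NICs, which is why all four pings above succeed before the target application has even started.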
00:24:10.425 11:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.425 11:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:10.425 [2024-12-16 11:46:11.072871] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:10.425 [2024-12-16 11:46:11.074310] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:24:10.425 [2024-12-16 11:46:11.074420] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.684 [2024-12-16 11:46:11.227597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:10.684 [2024-12-16 11:46:11.284787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.684 [2024-12-16 11:46:11.284846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.684 [2024-12-16 11:46:11.284861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.684 [2024-12-16 11:46:11.284871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.684 [2024-12-16 11:46:11.284881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.684 [2024-12-16 11:46:11.286847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.684 [2024-12-16 11:46:11.286910] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.684 [2024-12-16 11:46:11.388794] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:10.684 [2024-12-16 11:46:11.389044] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:10.684 [2024-12-16 11:46:11.389181] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
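nvmfappstart launches nvmf_tgt inside the namespace with -m 0x3 and --interrupt-mode, which is why the trace shows reactors starting on cores 0 and 1 and every spdk_thread being set to intr mode. waitforlisten (max_retries=100 in the trace) then blocks until pid 102874 answers RPC on /var/tmp/spdk.sock. A minimal stand-in for that helper, with illustrative names and only the two essential checks (the real function in autotest_common.sh does more bookkeeping):

    # hypothetical sketch: poll until the app is alive and its RPC socket responds
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1     # app died during startup
            [[ -S $rpc_addr ]] &&
                scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1                                        # never came up
    }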
00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:11.638 [2024-12-16 11:46:12.111732] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:11.638 [2024-12-16 11:46:12.131999] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:11.638 NULL1 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.638 11:46:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:11.638 Delay0 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=102921 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:24:11.638 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:24:11.638 [2024-12-16 11:46:12.335994] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
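Lines @15 through @26 of delete_subsystem.sh assemble the target: a TCP transport, subsystem cnode1 with room for 10 namespaces, a listener on 10.0.0.3:4420, and a null bdev wrapped in a delay bdev so that every I/O takes about one second, long enough to guarantee a full queue is still in flight when the subsystem is deleted. The same sequence as direct rpc.py invocations (rpc_cmd in the trace is essentially a wrapper around these calls):

    rpc.py nvmf_create_transport -t tcp -o -u 8192        # -u: 8 KiB io-unit-size
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_null_create NULL1 1000 512                # 1000 MiB backing bdev, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # avg/p99 read+write latency in microseconds (~1 s)
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

spdk_nvme_perf then connects to that listener and drives 128-deep 70/30 random read/write at 512 B from cores 2 and 3 (-c 0xC) for five seconds; with Delay0 in the path, essentially all of those commands are still queued when the next step tears the subsystem down.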
00:24:13.540 11:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:13.540 11:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.540 11:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 starting I/O failed: -6 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Write completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 starting I/O failed: -6 00:24:13.799 Write completed with error (sct=0, sc=8) 00:24:13.799 Write completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 starting I/O failed: -6 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 starting I/O failed: -6 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Write completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 starting I/O failed: -6 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Write completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Write completed with error (sct=0, sc=8) 00:24:13.799 starting I/O failed: -6 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Write completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 starting I/O failed: -6 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Write completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Write completed with error (sct=0, sc=8) 00:24:13.799 starting I/O failed: -6 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 starting I/O failed: -6 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Write completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 starting I/O failed: -6 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Write completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.799 Read completed with error (sct=0, sc=8) 00:24:13.800 starting I/O failed: -6 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 [2024-12-16 11:46:14.372004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131d7e0 is same with the state(6) to be set 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed 
with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 starting I/O failed: -6 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 starting I/O failed: -6 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 starting I/O failed: -6 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 starting I/O failed: -6 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read 
completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 starting I/O failed: -6 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 starting I/O failed: -6 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 starting I/O failed: -6 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 starting I/O failed: -6 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 starting I/O failed: -6 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 starting I/O failed: -6 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 [2024-12-16 11:46:14.375686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6648000c80 is same with the state(6) to be set 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with 
error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:13.800 Write completed with error (sct=0, sc=8) 00:24:13.800 Read completed with error (sct=0, sc=8) 00:24:14.736 [2024-12-16 11:46:15.351334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311aa0 is same with the state(6) to be set 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 [2024-12-16 11:46:15.373689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ca50 is same with the state(6) to be set 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 [2024-12-16 11:46:15.374331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x131fea0 is same with the state(6) to be set 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 [2024-12-16 11:46:15.376127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f664800d840 is same with the state(6) to be set 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Write completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.736 Read completed with error (sct=0, sc=8) 00:24:14.737 Write completed with error (sct=0, sc=8) 00:24:14.737 Write completed with error (sct=0, sc=8) 00:24:14.737 Read completed with error (sct=0, sc=8) 00:24:14.737 Write completed with error (sct=0, sc=8) 00:24:14.737 Read completed with error (sct=0, sc=8) 00:24:14.737 Read completed with error (sct=0, sc=8) 00:24:14.737 Write completed with error (sct=0, sc=8) 00:24:14.737 Read completed with error (sct=0, sc=8) 00:24:14.737 Read completed with error (sct=0, sc=8) 00:24:14.737 [2024-12-16 11:46:15.376372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f664800d060 is same with the state(6) to be set 00:24:14.737 Initializing NVMe Controllers 00:24:14.737 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:14.737 Controller IO queue size 128, less than required. 00:24:14.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:14.737 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:24:14.737 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:24:14.737 Initialization complete. Launching workers. 
00:24:14.737 ======================================================== 00:24:14.737 Latency(us) 00:24:14.737 Device Information : IOPS MiB/s Average min max 00:24:14.737 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.32 0.08 914272.40 1243.92 1013134.81 00:24:14.737 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.33 0.08 914579.48 441.81 1017658.00 00:24:14.737 ======================================================== 00:24:14.737 Total : 323.65 0.16 914425.47 441.81 1017658.00 00:24:14.737 00:24:14.737 [2024-12-16 11:46:15.377476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311aa0 (9): Bad file descriptor 00:24:14.737 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:24:14.737 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.737 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:24:14.737 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 102921 00:24:14.737 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 102921 00:24:15.305 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (102921) - No such process 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 102921 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 102921 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 102921 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:15.305 [2024-12-16 11:46:15.908050] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=102967 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102967 00:24:15.305 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:15.564 [2024-12-16 11:46:16.090683] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
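The wall of "completed with error (sct=0, sc=8)" lines above is the expected outcome, not a failure of the test: nvmf_delete_subsystem was issued while perf had a full queue outstanding, so every queued command came back aborted (sct=0/sc=8 is a generic-status abort code, consistent with the qpairs being torn down) and new submissions failed with -6. The script then polls until perf exits and asserts that it exited with an error. Roughly, as reconstructed from traced lines @32 through @45 (failure handling elided):

    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-I/O
    delay=0
    while kill -0 $perf_pid; do        # loop while perf is still running
        sleep 0.5
        (( delay++ > 30 )) && exit 1   # give perf roughly 15 s to notice and die
    done
    NOT wait $perf_pid                 # NOT asserts failure: perf must exit non-zero

The second half of the test, starting at line @48 just above, re-creates the subsystem, reruns perf for three seconds, and this time waits it out with a plain wait that must succeed; the roughly 1,004,958 us average latency in the summary that follows matches Delay0's configured 1,000,000 us.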
00:24:15.823 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:15.823 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102967 00:24:15.823 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:16.390 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:16.390 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102967 00:24:16.390 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:16.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:16.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102967 00:24:16.957 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:17.217 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:17.217 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102967 00:24:17.217 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:17.785 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:17.785 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102967 00:24:17.785 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:18.351 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:18.351 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102967 00:24:18.351 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:18.609 Initializing NVMe Controllers 00:24:18.609 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:18.609 Controller IO queue size 128, less than required. 00:24:18.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:18.609 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:24:18.609 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:24:18.609 Initialization complete. Launching workers. 
00:24:18.609 ======================================================== 00:24:18.609 Latency(us) 00:24:18.609 Device Information : IOPS MiB/s Average min max 00:24:18.609 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003700.19 1000175.39 1043495.99 00:24:18.609 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006215.97 1000296.30 1015695.58 00:24:18.609 ======================================================== 00:24:18.609 Total : 256.00 0.12 1004958.08 1000175.39 1043495.99 00:24:18.609 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 102967 00:24:18.868 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (102967) - No such process 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 102967 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:18.868 rmmod nvme_tcp 00:24:18.868 rmmod nvme_fabrics 00:24:18.868 rmmod nvme_keyring 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 102874 ']' 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 102874 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 102874 ']' 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 102874 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 102874 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:18.868 killing process with pid 102874 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102874' 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 102874 00:24:18.868 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 102874 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:19.172 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:19.431 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:19.431 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:24:19.431 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:24:19.431 00:24:19.431 real 0m9.701s 00:24:19.431 user 0m24.424s 00:24:19.431 sys 0m2.457s 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:19.431 ************************************ 00:24:19.431 END TEST nvmf_delete_subsystem 00:24:19.431 ************************************ 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:19.431 ************************************ 00:24:19.431 START TEST nvmf_host_management 00:24:19.431 ************************************ 00:24:19.431 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:24:19.691 * Looking for test storage... 
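The nvmftestfini teardown traced above always runs the same fixed sequence: stop the target process, unload the kernel NVMe-oF initiator modules, strip the harness's tagged iptables rules, and dismantle the veth topology. A condensed sketch of that order of operations, reconstructed from the commands visible in the trace (pids and interface names are this run's values); this is an illustration, not a verbatim copy of nvmf/common.sh:

# teardown sketch, reconstructed from the trace above
kill 102874                                           # killprocess: the nvmf_tgt (reactor_0) process
modprobe -v -r nvme-tcp                               # unloads nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: keep only rules not tagged SPDK_NVMF
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster                       # detach the bridge-side veth ends
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if                           # deleting one veth end also removes its peer
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
# remove_spdk_ns runs with its trace suppressed (15> /dev/null); deleting the
# namespace here is an assumption about what it does, not shown in the log.
ip netns delete nvmf_tgt_ns_spdk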
00:24:19.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:19.691 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:19.691 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:19.691 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:24:19.691 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:19.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.692 --rc genhtml_branch_coverage=1 00:24:19.692 --rc genhtml_function_coverage=1 00:24:19.692 --rc genhtml_legend=1 00:24:19.692 --rc geninfo_all_blocks=1 00:24:19.692 --rc geninfo_unexecuted_blocks=1 00:24:19.692 00:24:19.692 ' 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:19.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.692 --rc genhtml_branch_coverage=1 00:24:19.692 --rc genhtml_function_coverage=1 00:24:19.692 --rc genhtml_legend=1 00:24:19.692 --rc geninfo_all_blocks=1 00:24:19.692 --rc geninfo_unexecuted_blocks=1 00:24:19.692 00:24:19.692 ' 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:19.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.692 --rc genhtml_branch_coverage=1 00:24:19.692 --rc genhtml_function_coverage=1 00:24:19.692 --rc genhtml_legend=1 00:24:19.692 --rc geninfo_all_blocks=1 00:24:19.692 --rc geninfo_unexecuted_blocks=1 00:24:19.692 00:24:19.692 ' 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:19.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.692 --rc genhtml_branch_coverage=1 00:24:19.692 --rc genhtml_function_coverage=1 00:24:19.692 --rc genhtml_legend=1 
00:24:19.692 --rc geninfo_all_blocks=1 00:24:19.692 --rc geninfo_unexecuted_blocks=1 00:24:19.692 00:24:19.692 ' 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.692 11:46:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:19.692 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:19.693 11:46:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:19.693 Cannot find device "nvmf_init_br" 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:19.693 Cannot find device "nvmf_init_br2" 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:19.693 Cannot find device "nvmf_tgt_br" 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:19.693 Cannot find device "nvmf_tgt_br2" 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:19.693 Cannot find device "nvmf_init_br" 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:24:19.693 Cannot find device "nvmf_init_br2" 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:19.693 Cannot find device "nvmf_tgt_br" 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:19.693 Cannot find device "nvmf_tgt_br2" 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:19.693 Cannot find device "nvmf_br" 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:24:19.693 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:19.953 Cannot find device "nvmf_init_if" 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:19.953 Cannot find device "nvmf_init_if2" 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:19.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:19.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:19.953 11:46:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:19.953 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:19.953 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:24:19.953 00:24:19.953 --- 10.0.0.3 ping statistics --- 00:24:19.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.953 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:19.953 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:19.953 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:24:19.953 00:24:19.953 --- 10.0.0.4 ping statistics --- 00:24:19.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.953 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:19.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:24:19.953 00:24:19.953 --- 10.0.0.1 ping statistics --- 00:24:19.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.953 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:19.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:19.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:24:19.953 00:24:19.953 --- 10.0.0.2 ping statistics --- 00:24:19.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.953 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:19.953 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:19.954 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.954 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:19.954 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:19.954 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:24:19.954 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:24:19.954 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:24:19.954 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:19.954 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.954 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:20.213 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=103256 00:24:20.213 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:24:20.213 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 103256 00:24:20.213 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 103256 ']' 00:24:20.213 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.213 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.213 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
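The four pings above close out nvmf_veth_init: the host reaches both target addresses and the namespace reaches both initiator addresses, so the target now being started will be reachable over NVMe/TCP. Condensed, the topology the preceding ip commands built looks like this; names and addresses are exactly as in this run, the iptables comment strings are shortened, and the loop merely groups commands the trace issues one by one:

# topology sketch: two initiator veths and two target veths joined by one bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side, host netns
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up     # one L2 segment for all four links
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF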
00:24:20.213 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.213 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:20.213 [2024-12-16 11:46:20.767691] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:20.213 [2024-12-16 11:46:20.768953] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:24:20.213 [2024-12-16 11:46:20.769033] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.213 [2024-12-16 11:46:20.917335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.213 [2024-12-16 11:46:20.964903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.213 [2024-12-16 11:46:20.964982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.213 [2024-12-16 11:46:20.964993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.213 [2024-12-16 11:46:20.965001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.213 [2024-12-16 11:46:20.965008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.472 [2024-12-16 11:46:20.966216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.472 [2024-12-16 11:46:20.966282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.472 [2024-12-16 11:46:20.966434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:24:20.472 [2024-12-16 11:46:20.966438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.472 [2024-12-16 11:46:21.058100] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:20.472 [2024-12-16 11:46:21.058529] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:24:20.472 [2024-12-16 11:46:21.058824] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:20.472 [2024-12-16 11:46:21.059122] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:20.472 [2024-12-16 11:46:21.059378] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
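nvmfappstart launches the target inside that namespace, and the startup notices above decode its flags: -m 0x1E starts four reactors on cores 1 through 4, -e 0xFFFF enables the full tracepoint group mask, and --interrupt-mode produces the 'Set spdk_thread (...) to intr mode' lines, switching the app thread and each nvmf_tgt poll group from busy polling to event-driven scheduling. A minimal launch sketch follows; the command line is copied from the trace, but the readiness loop is an illustrative stand-in, not the real waitforlisten from autotest_common.sh:

# launch sketch: the interrupt-mode target, pid 103256 in this run
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
# wait until the app's RPC socket exists (simplified; waitforlisten also retries RPCs)
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done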
00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:20.472 [2024-12-16 11:46:21.151537] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.472 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:20.472 Malloc0 00:24:20.731 [2024-12-16 11:46:21.235461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:20.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
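Only the transport creation is traced verbatim here; the batch of RPCs that produces the Malloc0 bdev and the subsystem is cat'd into rpc_cmd from a generated rpcs.txt whose contents are not echoed, and only the resulting 'Listening on 10.0.0.3 port 4420' notice is visible. Judging from the harness defaults seen earlier (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NVMF_SERIAL, and the cnode0/host0 NQNs used below), an equivalent sequence of individual calls would look roughly like this; treat everything after the first call as a reconstruction, not a dump of the real file:

# provisioning sketch: what the batched rpcs.txt plausibly expands to
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192          # exactly as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # revoked later by the test
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420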
00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=103313 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 103313 /var/tmp/bdevperf.sock 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 103313 ']' 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:20.731 { 00:24:20.731 "params": { 00:24:20.731 "name": "Nvme$subsystem", 00:24:20.731 "trtype": "$TEST_TRANSPORT", 00:24:20.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.731 "adrfam": "ipv4", 00:24:20.731 "trsvcid": "$NVMF_PORT", 00:24:20.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.731 "hdgst": ${hdgst:-false}, 00:24:20.731 "ddgst": ${ddgst:-false} 00:24:20.731 }, 00:24:20.731 "method": "bdev_nvme_attach_controller" 00:24:20.731 } 00:24:20.731 EOF 00:24:20.731 )") 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
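gen_nvmf_target_json renders the heredoc above once per subsystem argument (a single '0' here) and the result is fed to bdevperf through process substitution, which is why its command line says --json /dev/fd/63. After the jq and printf steps on the next lines, the attach entry it produces is the following document (reindented, timestamps dropped); bdevperf uses it to create controller Nvme0 against the listener set up above, then drives the namespace bdev Nvme0n1 with 64 outstanding 65536-byte verify I/Os for 10 seconds (-q 64 -o 65536 -w verify -t 10):

{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}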
00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:24:20.731 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:20.731 "params": { 00:24:20.731 "name": "Nvme0", 00:24:20.731 "trtype": "tcp", 00:24:20.731 "traddr": "10.0.0.3", 00:24:20.731 "adrfam": "ipv4", 00:24:20.731 "trsvcid": "4420", 00:24:20.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:20.731 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:20.731 "hdgst": false, 00:24:20.731 "ddgst": false 00:24:20.731 }, 00:24:20.731 "method": "bdev_nvme_attach_controller" 00:24:20.731 }' 00:24:20.731 [2024-12-16 11:46:21.347432] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:24:20.731 [2024-12-16 11:46:21.347546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103313 ] 00:24:20.991 [2024-12-16 11:46:21.499206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.991 [2024-12-16 11:46:21.586037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.249 Running I/O for 10 seconds... 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.819 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 [2024-12-16 11:46:22.508190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110530 is same with the state(6) to be set 00:24:21.819 [2024-12-16 11:46:22.508254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110530 is same with the state(6) to be set 00:24:21.819 [2024-12-16 11:46:22.508267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110530 is same with the state(6) to be set 00:24:21.819 [2024-12-16 11:46:22.508277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110530 is same with the state(6) to be set 00:24:21.819 [2024-12-16 11:46:22.508286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110530 is same with the state(6) to be set 00:24:21.819 [2024-12-16 11:46:22.508295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110530 is same with the state(6) to be set 00:24:21.819 [2024-12-16 11:46:22.508304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110530 is same with the state(6) to be set 00:24:21.819 [2024-12-16 11:46:22.508312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110530 is same with the state(6) to be set 00:24:21.819 [2024-12-16 11:46:22.508322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110530 is same with the state(6) to be set 00:24:21.819 [2024-12-16 11:46:22.508330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110530 is same with the state(6) to be set 00:24:21.819 [2024-12-16 11:46:22.508339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110530 is same with the state(6) to be set 
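The burst of identical tcp.c:1790 messages beginning here is the expected fallout of the nvmf_subsystem_remove_host call traced just above, which is the host-management step actually under test: revoking nqn.2016-06.io.spdk:host0's access while bdevperf still has 64 reads in flight makes the target tear the queue pair down, repeatedly resetting its receive state on the way out, and the in-flight commands then complete with 'ABORTED - SQ DELETION' a few lines further on. The step itself is a single RPC:

# the access revocation the test just issued (same call as the rpc_cmd above)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0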
00:24:21.819 [2024-12-16 11:46:22.508347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2110530 is same with the state(6) to be set
00:24:21.820 [2024-12-16 11:46:22.508356 - 11:46:22.508761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: (previous message repeated for tqpair=0x2110530)
00:24:21.820 [2024-12-16 11:46:22.508927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.820 [2024-12-16 11:46:22.508980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:21.822 [2024-12-16 11:46:22.509008 - 11:46:22.510403] nvme_qpair.c: *NOTICE*: (the same READ print_command / ABORTED - SQ DELETION completion pair repeated for cid:1 through cid:63, lba:123008 through lba:130944)
00:24:21.822 [2024-12-16 11:46:22.510415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d62a0 is same with the state(6) to be set
00:24:21.822 task offset: 122880 on job bdev=Nvme0n1 fails
00:24:21.822
00:24:21.822 Latency(us)
00:24:21.822 [2024-12-16T11:46:22.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:21.822 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.822 Job: Nvme0n1 ended in about 0.71 seconds with error
00:24:21.822 Verification LBA range: start 0x0 length 0x400
00:24:21.822 Nvme0n1 : 0.71 1360.47 85.03 90.70 0.00 43035.36 5213.09 38606.66
00:24:21.822 [2024-12-16T11:46:22.575Z] ===================================================================================================================
00:24:21.822 [2024-12-16T11:46:22.575Z] Total : 1360.47 85.03 90.70 0.00 43035.36 5213.09 38606.66
00:24:21.822 [2024-12-16 11:46:22.511697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:21.822 [2024-12-16 11:46:22.514486] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:21.822 [2024-12-16 11:46:22.514516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9de130 (9): Bad file descriptor
00:24:21.822 [2024-12-16 11:46:22.515603] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:24:21.822 [2024-12-16 11:46:22.515705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:24:21.822 [2024-12-16 11:46:22.515730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:21.822 [2024-12-16 11:46:22.515751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:24:21.822 [2024-12-16 11:46:22.515763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:24:21.822 [2024-12-16 11:46:22.515774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.822 [2024-12-16 11:46:22.515783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9de130
00:24:21.822 [2024-12-16 11:46:22.515823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9de130 (9): Bad file descriptor
00:24:21.822 [2024-12-16 11:46:22.515855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:21.822 [2024-12-16 11:46:22.515882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:21.822 [2024-12-16 11:46:22.515895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:21.822 [2024-12-16 11:46:22.515908] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:21.822 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.822 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:24:21.822 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.822 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:24:21.822 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.822 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 103313
00:24:23.200 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (103313) - No such process
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:24:23.200 {
00:24:23.200 "params": {
00:24:23.200 "name": "Nvme$subsystem",
00:24:23.200 "trtype": "$TEST_TRANSPORT",
00:24:23.200 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:23.200 "adrfam": "ipv4",
00:24:23.200 "trsvcid": "$NVMF_PORT",
00:24:23.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:23.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:23.200 "hdgst": ${hdgst:-false},
00:24:23.200 "ddgst": ${ddgst:-false}
00:24:23.200 },
00:24:23.200 "method": "bdev_nvme_attach_controller"
00:24:23.200 }
00:24:23.200 EOF
00:24:23.200 )")
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:24:23.200 11:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:24:23.200 "params": {
00:24:23.200 "name": "Nvme0",
00:24:23.200 "trtype": "tcp",
00:24:23.200 "traddr": "10.0.0.3",
00:24:23.200 "adrfam": "ipv4",
00:24:23.200 "trsvcid": "4420",
00:24:23.200 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:24:23.200 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:24:23.200 "hdgst": false,
00:24:23.200 "ddgst": false
00:24:23.200 },
00:24:23.200 "method": "bdev_nvme_attach_controller"
00:24:23.200 }'
00:24:23.200 [2024-12-16 11:46:23.606104] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:24:23.200 [2024-12-16 11:46:23.606230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103359 ]
00:24:23.200 [2024-12-16 11:46:23.756847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:23.200 [2024-12-16 11:46:23.840092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:24:23.459 Running I/O for 1 seconds...
00:24:24.395 1344.00 IOPS, 84.00 MiB/s
00:24:24.395
00:24:24.395 Latency(us)
00:24:24.395 [2024-12-16T11:46:25.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:24.395 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.395 Verification LBA range: start 0x0 length 0x400
00:24:24.395 Nvme0n1 : 1.01 1389.04 86.82 0.00 0.00 45171.80 7179.17 38844.97
00:24:24.395 [2024-12-16T11:46:25.148Z] ===================================================================================================================
00:24:24.395 [2024-12-16T11:46:25.148Z] Total : 1389.04 86.82 0.00 0.00 45171.80 7179.17 38844.97
00:24:24.653 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:24:24.653 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:24:24.653 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:24:24.653 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:24:24.653 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:24:24.653 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:24.653 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:24.911 rmmod nvme_tcp
00:24:24.911 rmmod nvme_fabrics
00:24:24.911 rmmod nvme_keyring
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 103256 ']'
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 103256
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 103256 ']'
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 103256
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103256
00:24:24.911 killing process with pid 103256
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103256'
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 103256
00:24:24.911 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 103256
00:24:25.170 [2024-12-16 11:46:25.727830] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:24:25.170 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:24:25.442 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:24:25.442 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:24:25.442 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns
00:24:25.442 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:25.442 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:25.442 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:25.442 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0
00:24:25.442 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:24:25.442
00:24:25.442 real 0m5.896s
00:24:25.442 user 0m19.952s
00:24:25.442 sys 0m2.375s
00:24:25.442 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:25.442 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:24:25.442 ************************************
00:24:25.442 END TEST nvmf_host_management
00:24:25.442 ************************************
00:24:25.442 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:24:25.442 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:24:25.442 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:25.442 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:24:25.442 ************************************
00:24:25.442 START TEST nvmf_lvol
00:24:25.442 ************************************
00:24:25.442 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:24:25.442 * Looking for test storage...
00:24:25.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:24:25.442 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:25.442 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:24:25.442 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:25.728 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:25.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:25.729 --rc genhtml_branch_coverage=1
00:24:25.729 --rc genhtml_function_coverage=1
00:24:25.729 --rc genhtml_legend=1
00:24:25.729 --rc geninfo_all_blocks=1
00:24:25.729 --rc geninfo_unexecuted_blocks=1
00:24:25.729
00:24:25.729 '
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:25.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:25.729 --rc genhtml_branch_coverage=1
00:24:25.729 --rc genhtml_function_coverage=1
00:24:25.729 --rc genhtml_legend=1
00:24:25.729 --rc geninfo_all_blocks=1
00:24:25.729 --rc geninfo_unexecuted_blocks=1
00:24:25.729
00:24:25.729 '
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:24:25.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:25.729 --rc genhtml_branch_coverage=1
00:24:25.729 --rc genhtml_function_coverage=1
00:24:25.729 --rc genhtml_legend=1
00:24:25.729 --rc geninfo_all_blocks=1
00:24:25.729 --rc geninfo_unexecuted_blocks=1
00:24:25.729
00:24:25.729 '
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:24:25.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:25.729 --rc genhtml_branch_coverage=1
00:24:25.729 --rc genhtml_function_coverage=1
00:24:25.729 --rc genhtml_legend=1
00:24:25.729 --rc geninfo_all_blocks=1
00:24:25.729 --rc geninfo_unexecuted_blocks=1
00:24:25.729
00:24:25.729 '
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:24:25.729 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:24:25.730 Cannot find device "nvmf_init_br"
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:24:25.730 Cannot find device "nvmf_init_br2"
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:24:25.730 Cannot find device "nvmf_tgt_br"
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:24:25.730 Cannot find device "nvmf_tgt_br2"
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:24:25.730 Cannot find device "nvmf_init_br"
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:24:25.730 Cannot find device "nvmf_init_br2"
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true
00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:24:25.730 Cannot find
device "nvmf_tgt_br" 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:25.730 Cannot find device "nvmf_tgt_br2" 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:25.730 Cannot find device "nvmf_br" 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:25.730 Cannot find device "nvmf_init_if" 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:25.730 Cannot find device "nvmf_init_if2" 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:25.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:25.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:25.730 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:25.989 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:25.989 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:24:25.989 00:24:25.989 --- 10.0.0.3 ping statistics --- 00:24:25.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.989 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:24:25.989 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:26.249 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:26.249 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.116 ms 00:24:26.249 00:24:26.249 --- 10.0.0.4 ping statistics --- 00:24:26.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.249 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:26.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:26.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:24:26.249 00:24:26.249 --- 10.0.0.1 ping statistics --- 00:24:26.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.249 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:26.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:24:26.249 00:24:26.249 --- 10.0.0.2 ping statistics --- 00:24:26.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.249 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=103631 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 103631 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 103631 ']' 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.249 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:26.249 [2024-12-16 11:46:26.852418] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:26.249 [2024-12-16 11:46:26.853701] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:24:26.249 [2024-12-16 11:46:26.854313] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.508 [2024-12-16 11:46:27.011731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:26.508 [2024-12-16 11:46:27.100253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.508 [2024-12-16 11:46:27.100336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.508 [2024-12-16 11:46:27.100351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.508 [2024-12-16 11:46:27.100362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.508 [2024-12-16 11:46:27.100375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.508 [2024-12-16 11:46:27.101877] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.508 [2024-12-16 11:46:27.102016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.508 [2024-12-16 11:46:27.102040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.508 [2024-12-16 11:46:27.240498] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:26.508 [2024-12-16 11:46:27.240938] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:26.508 [2024-12-16 11:46:27.241283] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:26.508 [2024-12-16 11:46:27.242080] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
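Note: the 'Cannot find device' and 'Cannot open network namespace' messages earlier in this trace are expected on a fresh runner: nvmf_veth_init first tears down any leftover topology, and each cleanup command is allowed to fail (hence the paired '# true' trace lines). What the setup then builds is a self-contained NVMe/TCP network. A condensed sketch of the equivalent commands, reconstructed from the ip/iptables calls traced above (the second veth pairs and the 10.0.0.2/10.0.0.4 addresses are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk                              # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                            # bridge stitches the two veth halves together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.3                                         # initiator-to-target reachability check

The target process is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt ...), so it listens on 10.0.0.3 while the initiator-side tools connect from the root namespace.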
00:24:26.766 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.766 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:24:26.766 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:26.766 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.766 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:26.766 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.766 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:27.025 [2024-12-16 11:46:27.619342] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.025 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:27.283 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:24:27.283 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:27.849 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:24:27.849 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:24:28.133 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:24:28.392 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6b5abb72-8ef0-430c-9d5f-b822a1721cbc 00:24:28.392 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6b5abb72-8ef0-430c-9d5f-b822a1721cbc lvol 20 00:24:28.650 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e6ba70f7-b136-46bf-a85c-7b161b2c97a1 00:24:28.650 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:28.908 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e6ba70f7-b136-46bf-a85c-7b161b2c97a1 00:24:29.167 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:29.425 [2024-12-16 11:46:29.967253] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:29.425 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:29.684 11:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=103764 00:24:29.684 11:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:24:29.684 11:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:24:30.619 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot e6ba70f7-b136-46bf-a85c-7b161b2c97a1 MY_SNAPSHOT 00:24:31.185 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0bba12bd-ae2f-4038-998c-7071896e895f 00:24:31.185 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize e6ba70f7-b136-46bf-a85c-7b161b2c97a1 30 00:24:31.443 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 0bba12bd-ae2f-4038-998c-7071896e895f MY_CLONE 00:24:31.702 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8ca905f9-0c23-45e1-b60f-9d0b57f98d39 00:24:31.702 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 8ca905f9-0c23-45e1-b60f-9d0b57f98d39 00:24:32.269 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 103764 00:24:40.406 Initializing NVMe Controllers 00:24:40.406 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:24:40.406 Controller IO queue size 128, less than required. 00:24:40.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.406 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:24:40.406 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:24:40.406 Initialization complete. Launching workers. 
00:24:40.406 ========================================================
00:24:40.406                                                                           Latency(us)
00:24:40.406 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:24:40.406 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   10557.20      41.24   12131.01    2940.03   67763.30
00:24:40.406 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   10556.80      41.24   12130.47    3197.99   93158.96
00:24:40.406 ========================================================
00:24:40.406 Total                                                                  :   21114.00      82.48   12130.74    2940.03   93158.96
00:24:40.406
00:24:40.406 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:40.406 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e6ba70f7-b136-46bf-a85c-7b161b2c97a1 00:24:40.665 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b5abb72-8ef0-430c-9d5f-b822a1721cbc 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.924 rmmod nvme_tcp 00:24:40.924 rmmod nvme_fabrics 00:24:40.924 rmmod nvme_keyring 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 103631 ']' 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 103631 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 103631 ']' 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 103631 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103631 00:24:40.924 11:46:41
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103631' 00:24:40.924 killing process with pid 103631 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 103631 00:24:40.924 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 103631 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:41.255 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:41.255 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:41.255 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:41.517 
11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:24:41.517 00:24:41.517 real 0m16.111s 00:24:41.517 user 0m56.432s 00:24:41.517 sys 0m5.931s 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:41.517 ************************************ 00:24:41.517 END TEST nvmf_lvol 00:24:41.517 ************************************ 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:41.517 ************************************ 00:24:41.517 START TEST nvmf_lvs_grow 00:24:41.517 ************************************ 00:24:41.517 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:24:41.777 * Looking for test storage... 
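Note: stripped of the xtrace noise, the nvmf_lvol test that just finished exercises lvol snapshot/clone handling while I/O is in flight. A condensed sketch of the rpc.py sequence, reconstructed from the trace above; capturing the returned UUIDs into shell variables is illustrative, and the concrete UUIDs shown earlier are specific to this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                          # Malloc0: 64 MiB, 512 B blocks
  $rpc bdev_malloc_create 64 512                          # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)          # lvstore on top of the raid0 bdev
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)         # 20 MiB logical volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # ... while spdk_nvme_perf runs randwrite against the subsystem:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)     # snapshot the live lvol
  $rpc bdev_lvol_resize "$lvol" 30                        # grow it from 20 to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)          # thin clone of the snapshot
  $rpc bdev_lvol_inflate "$clone"                         # allocate clusters, detach clone from snapshot
  # teardown: delete subsystem, lvol, and lvstore (the @56-@58 trace lines above)

The point of interleaving the snapshot/resize/clone/inflate calls with the perf run is that metadata operations must not disturb outstanding writes; the latency summary above shows both cores continued to complete I/O throughout.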
00:24:41.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:41.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.777 --rc genhtml_branch_coverage=1 00:24:41.777 --rc genhtml_function_coverage=1 00:24:41.777 --rc genhtml_legend=1 00:24:41.777 --rc geninfo_all_blocks=1 00:24:41.777 --rc geninfo_unexecuted_blocks=1 00:24:41.777 00:24:41.777 ' 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:41.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.777 --rc genhtml_branch_coverage=1 00:24:41.777 --rc genhtml_function_coverage=1 00:24:41.777 --rc genhtml_legend=1 00:24:41.777 --rc geninfo_all_blocks=1 00:24:41.777 --rc geninfo_unexecuted_blocks=1 00:24:41.777 00:24:41.777 ' 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:41.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.777 --rc genhtml_branch_coverage=1 00:24:41.777 --rc genhtml_function_coverage=1 00:24:41.777 --rc genhtml_legend=1 00:24:41.777 --rc geninfo_all_blocks=1 00:24:41.777 --rc geninfo_unexecuted_blocks=1 00:24:41.777 00:24:41.777 ' 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:41.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.777 --rc genhtml_branch_coverage=1 00:24:41.777 --rc genhtml_function_coverage=1 00:24:41.777 --rc genhtml_legend=1 00:24:41.777 --rc geninfo_all_blocks=1 00:24:41.777 --rc geninfo_unexecuted_blocks=1 00:24:41.777 00:24:41.777 ' 00:24:41.777 11:46:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.777 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
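Note: the NVMF_APP argument assembly traced here (and in the @33/@34 lines that follow) is what produces the target command line seen earlier in this log, nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 run under ip netns exec. A minimal sketch of that assembly logic; the interrupt_mode flag name is hypothetical, since the trace only shows the already-expanded test '[' 1 -eq 1 ']':

  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)       # shared-memory id + full tracepoint mask
  if [ "$interrupt_mode" -eq 1 ]; then              # hypothetical flag; this suite runs with --interrupt-mode
      NVMF_APP+=(--interrupt-mode)                  # reactors sleep on events instead of busy-polling
  fi
  # launched inside the target namespace with a 3-core mask, e.g.:
  ip netns exec nvmf_tgt_ns_spdk "${NVMF_APP[@]}" -m 0x7 &

Prepending the namespace command rather than baking addresses into the app is what lets the same nvmf_tgt binary serve both the veth (virt) and physical NIC test variants.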
00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.778 11:46:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:41.778 Cannot find device "nvmf_init_br" 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:41.778 Cannot find device "nvmf_init_br2" 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:41.778 Cannot find device "nvmf_tgt_br" 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:41.778 Cannot find device "nvmf_tgt_br2" 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:41.778 Cannot find device "nvmf_init_br" 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:41.778 Cannot find device "nvmf_init_br2" 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:41.778 Cannot find device "nvmf_tgt_br" 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:41.778 Cannot find device "nvmf_tgt_br2" 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:24:41.778 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:42.037 Cannot find device "nvmf_br" 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:42.037 Cannot find device "nvmf_init_if" 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:42.037 Cannot find device "nvmf_init_if2" 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:42.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:42.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:42.037 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3
00:24:42.038 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:24:42.038 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms
00:24:42.038
00:24:42.038 --- 10.0.0.3 ping statistics ---
00:24:42.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:42.038 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:24:42.038 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:24:42.038 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms
00:24:42.038
00:24:42.038 --- 10.0.0.4 ping statistics ---
00:24:42.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:42.038 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:24:42.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:42.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms
00:24:42.038
00:24:42.038 --- 10.0.0.1 ping statistics ---
00:24:42.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:42.038 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
00:24:42.038 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:24:42.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:42.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms
00:24:42.038
00:24:42.038 --- 10.0.0.2 ping statistics ---
00:24:42.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:42.038 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=104178
00:24:42.297 11:46:42
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 104178 00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 104178 ']' 00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.297 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:42.297 [2024-12-16 11:46:42.866609] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:42.297 [2024-12-16 11:46:42.867650] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:24:42.297 [2024-12-16 11:46:42.867711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.297 [2024-12-16 11:46:43.014308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.555 [2024-12-16 11:46:43.081419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.555 [2024-12-16 11:46:43.081477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.556 [2024-12-16 11:46:43.081493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.556 [2024-12-16 11:46:43.081505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.556 [2024-12-16 11:46:43.081516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.556 [2024-12-16 11:46:43.081945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.556 [2024-12-16 11:46:43.183577] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:42.556 [2024-12-16 11:46:43.183962] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
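The notices above complete nvmfappstart: nvmf_tgt comes up inside the nvmf_tgt_ns_spdk namespace on core 0 with its reactors and spdk_threads in interrupt mode, and the harness blocks until the target's RPC socket answers. A minimal stand-alone sketch of that start-and-wait sequence, assuming a checkout at ./spdk and the default /var/tmp/spdk.sock socket (the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its actual code):

    # Start the target in the test namespace; --interrupt-mode makes reactors
    # sleep on file descriptors instead of busy-polling.
    ip netns exec nvmf_tgt_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    # Poll the RPC socket until the app responds; rpc_get_methods is a cheap query.
    until ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.1
    done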
00:24:42.556 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.556 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:24:42.556 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:42.556 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.556 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:42.556 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.556 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:42.814 [2024-12-16 11:46:43.550840] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:43.073 ************************************ 00:24:43.073 START TEST lvs_grow_clean 00:24:43.073 ************************************ 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:43.073 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:43.332 11:46:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:24:43.332 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:24:43.591 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=24951d26-1b13-4833-801c-77600760485b 00:24:43.591 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24951d26-1b13-4833-801c-77600760485b 00:24:43.591 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:24:43.849 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:24:43.849 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:24:43.849 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 24951d26-1b13-4833-801c-77600760485b lvol 150 00:24:44.108 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e67a8e96-8705-49e2-9788-a7473028b0df 00:24:44.108 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:44.108 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:24:44.366 [2024-12-16 11:46:45.050617] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:24:44.366 [2024-12-16 11:46:45.050791] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:24:44.366 true 00:24:44.366 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24951d26-1b13-4833-801c-77600760485b 00:24:44.366 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:24:44.627 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:24:44.627 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:44.886 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e67a8e96-8705-49e2-9788-a7473028b0df 00:24:45.454 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:45.712 [2024-12-16 11:46:46.223100] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:45.712 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:45.970 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:24:45.970 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=104328 00:24:45.970 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:45.970 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 104328 /var/tmp/bdevperf.sock 00:24:45.970 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 104328 ']' 00:24:45.970 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.970 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:45.970 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.970 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.970 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:24:45.970 [2024-12-16 11:46:46.593914] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:24:45.970 [2024-12-16 11:46:46.594002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104328 ] 00:24:46.229 [2024-12-16 11:46:46.731547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.229 [2024-12-16 11:46:46.794439] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.165 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.165 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:24:47.165 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:24:47.165 Nvme0n1 00:24:47.423 11:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:24:47.682 [ 00:24:47.682 { 00:24:47.682 "aliases": [ 00:24:47.682 "e67a8e96-8705-49e2-9788-a7473028b0df" 00:24:47.682 ], 00:24:47.682 "assigned_rate_limits": { 00:24:47.682 "r_mbytes_per_sec": 0, 00:24:47.682 "rw_ios_per_sec": 0, 00:24:47.682 "rw_mbytes_per_sec": 0, 00:24:47.682 "w_mbytes_per_sec": 0 00:24:47.682 }, 00:24:47.682 "block_size": 4096, 00:24:47.682 "claimed": false, 00:24:47.682 "driver_specific": { 00:24:47.682 "mp_policy": "active_passive", 00:24:47.682 "nvme": [ 00:24:47.682 { 00:24:47.682 "ctrlr_data": { 00:24:47.682 "ana_reporting": false, 00:24:47.682 "cntlid": 1, 00:24:47.682 "firmware_revision": "25.01", 00:24:47.682 "model_number": "SPDK bdev Controller", 00:24:47.682 "multi_ctrlr": true, 00:24:47.682 "oacs": { 00:24:47.682 "firmware": 0, 00:24:47.682 "format": 0, 00:24:47.682 "ns_manage": 0, 00:24:47.682 "security": 0 00:24:47.682 }, 00:24:47.682 "serial_number": "SPDK0", 00:24:47.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:47.682 "vendor_id": "0x8086" 00:24:47.682 }, 00:24:47.682 "ns_data": { 00:24:47.682 "can_share": true, 00:24:47.682 "id": 1 00:24:47.682 }, 00:24:47.682 "trid": { 00:24:47.682 "adrfam": "IPv4", 00:24:47.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:47.682 "traddr": "10.0.0.3", 00:24:47.682 "trsvcid": "4420", 00:24:47.682 "trtype": "TCP" 00:24:47.682 }, 00:24:47.682 "vs": { 00:24:47.682 "nvme_version": "1.3" 00:24:47.682 } 00:24:47.682 } 00:24:47.682 ] 00:24:47.682 }, 00:24:47.682 "memory_domains": [ 00:24:47.682 { 00:24:47.682 "dma_device_id": "system", 00:24:47.682 "dma_device_type": 1 00:24:47.682 } 00:24:47.682 ], 00:24:47.682 "name": "Nvme0n1", 00:24:47.682 "num_blocks": 38912, 00:24:47.682 "numa_id": -1, 00:24:47.682 "product_name": "NVMe disk", 00:24:47.682 "supported_io_types": { 00:24:47.682 "abort": true, 00:24:47.682 "compare": true, 00:24:47.682 "compare_and_write": true, 00:24:47.682 "copy": true, 00:24:47.682 "flush": true, 00:24:47.682 "get_zone_info": false, 00:24:47.682 "nvme_admin": true, 00:24:47.682 "nvme_io": true, 00:24:47.682 "nvme_io_md": false, 00:24:47.682 "nvme_iov_md": false, 00:24:47.682 "read": true, 00:24:47.682 "reset": true, 00:24:47.682 "seek_data": false, 00:24:47.682 
"seek_hole": false, 00:24:47.682 "unmap": true, 00:24:47.682 "write": true, 00:24:47.682 "write_zeroes": true, 00:24:47.682 "zcopy": false, 00:24:47.682 "zone_append": false, 00:24:47.682 "zone_management": false 00:24:47.682 }, 00:24:47.682 "uuid": "e67a8e96-8705-49e2-9788-a7473028b0df", 00:24:47.682 "zoned": false 00:24:47.682 } 00:24:47.682 ] 00:24:47.682 11:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=104381 00:24:47.682 11:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:47.682 11:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:24:47.682 Running I/O for 10 seconds... 00:24:48.619 Latency(us) 00:24:48.619 [2024-12-16T11:46:49.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:48.619 Nvme0n1 : 1.00 6926.00 27.05 0.00 0.00 0.00 0.00 0.00 00:24:48.619 [2024-12-16T11:46:49.372Z] =================================================================================================================== 00:24:48.619 [2024-12-16T11:46:49.372Z] Total : 6926.00 27.05 0.00 0.00 0.00 0.00 0.00 00:24:48.619 00:24:49.555 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 24951d26-1b13-4833-801c-77600760485b 00:24:49.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:49.814 Nvme0n1 : 2.00 7480.50 29.22 0.00 0.00 0.00 0.00 0.00 00:24:49.814 [2024-12-16T11:46:50.567Z] =================================================================================================================== 00:24:49.814 [2024-12-16T11:46:50.567Z] Total : 7480.50 29.22 0.00 0.00 0.00 0.00 0.00 00:24:49.814 00:24:49.814 true 00:24:50.073 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24951d26-1b13-4833-801c-77600760485b 00:24:50.073 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:24:50.331 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:24:50.331 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:24:50.331 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 104381 00:24:50.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:50.591 Nvme0n1 : 3.00 7661.67 29.93 0.00 0.00 0.00 0.00 0.00 00:24:50.591 [2024-12-16T11:46:51.344Z] =================================================================================================================== 00:24:50.591 [2024-12-16T11:46:51.344Z] Total : 7661.67 29.93 0.00 0.00 0.00 0.00 0.00 00:24:50.591 00:24:51.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:51.989 Nvme0n1 : 4.00 7748.00 30.27 0.00 0.00 0.00 0.00 0.00 00:24:51.989 
[2024-12-16T11:46:52.742Z] =================================================================================================================== 00:24:51.989 [2024-12-16T11:46:52.742Z] Total : 7748.00 30.27 0.00 0.00 0.00 0.00 0.00 00:24:51.989 00:24:52.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:52.924 Nvme0n1 : 5.00 7782.40 30.40 0.00 0.00 0.00 0.00 0.00 00:24:52.924 [2024-12-16T11:46:53.677Z] =================================================================================================================== 00:24:52.924 [2024-12-16T11:46:53.677Z] Total : 7782.40 30.40 0.00 0.00 0.00 0.00 0.00 00:24:52.924 00:24:53.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:53.859 Nvme0n1 : 6.00 7786.17 30.41 0.00 0.00 0.00 0.00 0.00 00:24:53.859 [2024-12-16T11:46:54.612Z] =================================================================================================================== 00:24:53.859 [2024-12-16T11:46:54.612Z] Total : 7786.17 30.41 0.00 0.00 0.00 0.00 0.00 00:24:53.859 00:24:54.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:54.793 Nvme0n1 : 7.00 7804.43 30.49 0.00 0.00 0.00 0.00 0.00 00:24:54.793 [2024-12-16T11:46:55.546Z] =================================================================================================================== 00:24:54.793 [2024-12-16T11:46:55.546Z] Total : 7804.43 30.49 0.00 0.00 0.00 0.00 0.00 00:24:54.793 00:24:55.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:55.727 Nvme0n1 : 8.00 7798.62 30.46 0.00 0.00 0.00 0.00 0.00 00:24:55.727 [2024-12-16T11:46:56.480Z] =================================================================================================================== 00:24:55.727 [2024-12-16T11:46:56.480Z] Total : 7798.62 30.46 0.00 0.00 0.00 0.00 0.00 00:24:55.727 00:24:56.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:56.661 Nvme0n1 : 9.00 7791.00 30.43 0.00 0.00 0.00 0.00 0.00 00:24:56.661 [2024-12-16T11:46:57.414Z] =================================================================================================================== 00:24:56.661 [2024-12-16T11:46:57.414Z] Total : 7791.00 30.43 0.00 0.00 0.00 0.00 0.00 00:24:56.661 00:24:57.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:57.596 Nvme0n1 : 10.00 7786.80 30.42 0.00 0.00 0.00 0.00 0.00 00:24:57.596 [2024-12-16T11:46:58.349Z] =================================================================================================================== 00:24:57.596 [2024-12-16T11:46:58.349Z] Total : 7786.80 30.42 0.00 0.00 0.00 0.00 0.00 00:24:57.596 00:24:57.596 00:24:57.596 Latency(us) 00:24:57.596 [2024-12-16T11:46:58.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:57.596 Nvme0n1 : 10.02 7786.61 30.42 0.00 0.00 16426.69 7983.48 42896.29 00:24:57.596 [2024-12-16T11:46:58.349Z] =================================================================================================================== 00:24:57.596 [2024-12-16T11:46:58.349Z] Total : 7786.61 30.42 0.00 0.00 16426.69 7983.48 42896.29 00:24:57.596 { 00:24:57.596 "results": [ 00:24:57.596 { 00:24:57.596 "job": "Nvme0n1", 00:24:57.596 "core_mask": "0x2", 00:24:57.596 "workload": "randwrite", 00:24:57.596 "status": "finished", 00:24:57.596 "queue_depth": 128, 00:24:57.596 "io_size": 4096, 
00:24:57.596 "runtime": 10.016677,
00:24:57.596 "iops": 7786.614263392939,
00:24:57.596 "mibps": 30.41646196637867,
00:24:57.596 "io_failed": 0,
00:24:57.596 "io_timeout": 0,
00:24:57.596 "avg_latency_us": 16426.69263001832,
00:24:57.596 "min_latency_us": 7983.476363636363,
00:24:57.596 "max_latency_us": 42896.29090909091
00:24:57.596 }
00:24:57.596 ],
00:24:57.596 "core_count": 1
00:24:57.596 }
00:24:57.854 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 104328
00:24:57.854 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 104328 ']'
00:24:57.854 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 104328
00:24:57.854 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:24:57.854 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:57.854 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104328
00:24:57.854 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:57.854 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:57.854 killing process with pid 104328
00:24:57.854 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104328'
00:24:57.854 Received shutdown signal, test time was about 10.000000 seconds
00:24:57.854
00:24:57.854 Latency(us)
00:24:57.854 [2024-12-16T11:46:58.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:57.854 [2024-12-16T11:46:58.607Z] ===================================================================================================================
00:24:57.854 [2024-12-16T11:46:58.607Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 104328
11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 104328
00:24:58.112 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:24:58.370 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:24:58.627 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24951d26-1b13-4833-801c-77600760485b
00:24:58.627 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:24:58.885 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
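The clean run's accounting closes out here: total_data_clusters grew from 49 to 99 when the 200M AIO file was truncated to 400M and rescanned, and with the 150M lvol pinning 38 clusters the store reports 61 free. A minimal sketch of that bookkeeping check, assuming the same checkout at ./spdk and reusing the lvstore UUID printed above:

    lvs=24951d26-1b13-4833-801c-77600760485b
    total=$(./spdk/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    free=$(./spdk/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    # 400M backing file with 4M clusters yields 99 data clusters; the 150M lvol holds 38.
    (( total == 99 && free == 61 )) || echo "unexpected lvstore accounting" >&2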
00:24:58.885 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:24:58.885 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:24:59.143 [2024-12-16 11:46:59.834689] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:24:59.143 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24951d26-1b13-4833-801c-77600760485b
00:24:59.143 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:24:59.143 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24951d26-1b13-4833-801c-77600760485b
00:24:59.143 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:59.143 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:59.143 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:59.143 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:59.143 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:59.143 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:59.143 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:59.143 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:24:59.143 11:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24951d26-1b13-4833-801c-77600760485b
00:24:59.401 2024/12/16 11:47:00 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:24951d26-1b13-4833-801c-77600760485b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device
00:24:59.401 request:
00:24:59.401 {
00:24:59.401 "method": "bdev_lvol_get_lvstores",
00:24:59.401 "params": {
00:24:59.401 "uuid": "24951d26-1b13-4833-801c-77600760485b"
00:24:59.401 }
00:24:59.401 }
00:24:59.401 Got JSON-RPC error response
00:24:59.401 GoRPCClient: error on JSON-RPC call
00:24:59.401 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:24:59.401 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
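Deleting the backing aio_bdev hot-removes the lvstore, so the @85 assertion reruns the query under the harness's NOT wrapper and passes only because the RPC fails with Code=-19 (No such device). Stripped of the wrapper machinery traced above, the check reduces to an inverted exit-status test, sketched here with the repo path shortened to ./spdk:

    if ./spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24951d26-1b13-4833-801c-77600760485b &>/dev/null; then
        echo "lvstore unexpectedly still present" >&2
        exit 1
    fi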
00:24:59.401 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:59.401 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:59.401 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:59.967 aio_bdev 00:24:59.967 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e67a8e96-8705-49e2-9788-a7473028b0df 00:24:59.967 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e67a8e96-8705-49e2-9788-a7473028b0df 00:24:59.967 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:59.967 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:24:59.967 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:59.967 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:59.967 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:00.225 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e67a8e96-8705-49e2-9788-a7473028b0df -t 2000 00:25:00.483 [ 00:25:00.483 { 00:25:00.483 "aliases": [ 00:25:00.483 "lvs/lvol" 00:25:00.483 ], 00:25:00.483 "assigned_rate_limits": { 00:25:00.483 "r_mbytes_per_sec": 0, 00:25:00.483 "rw_ios_per_sec": 0, 00:25:00.483 "rw_mbytes_per_sec": 0, 00:25:00.483 "w_mbytes_per_sec": 0 00:25:00.483 }, 00:25:00.483 "block_size": 4096, 00:25:00.483 "claimed": false, 00:25:00.483 "driver_specific": { 00:25:00.483 "lvol": { 00:25:00.483 "base_bdev": "aio_bdev", 00:25:00.483 "clone": false, 00:25:00.483 "esnap_clone": false, 00:25:00.483 "lvol_store_uuid": "24951d26-1b13-4833-801c-77600760485b", 00:25:00.483 "num_allocated_clusters": 38, 00:25:00.483 "snapshot": false, 00:25:00.483 "thin_provision": false 00:25:00.483 } 00:25:00.483 }, 00:25:00.483 "name": "e67a8e96-8705-49e2-9788-a7473028b0df", 00:25:00.483 "num_blocks": 38912, 00:25:00.483 "product_name": "Logical Volume", 00:25:00.483 "supported_io_types": { 00:25:00.483 "abort": false, 00:25:00.483 "compare": false, 00:25:00.483 "compare_and_write": false, 00:25:00.483 "copy": false, 00:25:00.483 "flush": false, 00:25:00.483 "get_zone_info": false, 00:25:00.483 "nvme_admin": false, 00:25:00.483 "nvme_io": false, 00:25:00.483 "nvme_io_md": false, 00:25:00.483 "nvme_iov_md": false, 00:25:00.483 "read": true, 00:25:00.483 "reset": true, 00:25:00.483 "seek_data": true, 00:25:00.483 "seek_hole": true, 00:25:00.483 "unmap": true, 00:25:00.483 "write": true, 00:25:00.483 "write_zeroes": true, 00:25:00.483 "zcopy": false, 00:25:00.483 "zone_append": false, 00:25:00.483 "zone_management": false 00:25:00.483 }, 00:25:00.483 "uuid": 
"e67a8e96-8705-49e2-9788-a7473028b0df", 00:25:00.483 "zoned": false 00:25:00.483 } 00:25:00.483 ] 00:25:00.483 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:25:00.483 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24951d26-1b13-4833-801c-77600760485b 00:25:00.483 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:25:00.756 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:25:00.756 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24951d26-1b13-4833-801c-77600760485b 00:25:00.756 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:25:01.037 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:25:01.037 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e67a8e96-8705-49e2-9788-a7473028b0df 00:25:01.295 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 24951d26-1b13-4833-801c-77600760485b 00:25:01.554 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:01.812 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:02.378 ************************************ 00:25:02.379 END TEST lvs_grow_clean 00:25:02.379 ************************************ 00:25:02.379 00:25:02.379 real 0m19.328s 00:25:02.379 user 0m18.692s 00:25:02.379 sys 0m2.253s 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:02.379 ************************************ 00:25:02.379 START TEST lvs_grow_dirty 00:25:02.379 ************************************ 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:25:02.379 11:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:02.379 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:02.637 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:25:02.637 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:25:03.204 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:03.204 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:03.204 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:25:03.462 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:25:03.462 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:25:03.462 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 lvol 150 00:25:03.720 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=88fbe358-4398-43c9-ae7e-45f19d4c4e5a 00:25:03.720 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:03.720 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:25:03.978 [2024-12-16 11:47:04.542606] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:25:03.978 [2024-12-16 11:47:04.542753] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:25:03.978 true 00:25:03.978 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:25:03.978 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:04.237 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:25:04.237 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:04.498 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 88fbe358-4398-43c9-ae7e-45f19d4c4e5a 00:25:04.757 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:05.323 [2024-12-16 11:47:05.779067] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:05.323 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:05.323 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:25:05.323 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=104777 00:25:05.323 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.324 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 104777 /var/tmp/bdevperf.sock 00:25:05.324 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 104777 ']' 00:25:05.324 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:05.324 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
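As in the clean variant, bdevperf is launched with -z so it initializes, opens /var/tmp/bdevperf.sock, and idles until driven over RPC; the harness then attaches the exported namespace and triggers the timed run. A minimal sketch of that driving sequence once the socket is ready (arguments copied from the trace, repo path shortened to ./spdk, readiness polling omitted):

    sock=/var/tmp/bdevperf.sock
    ./spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # perform_tests starts the randwrite workload configured on the bdevperf command line.
    ./spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests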
00:25:05.324 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:05.324 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.324 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:05.582 [2024-12-16 11:47:06.117491] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:25:05.582 [2024-12-16 11:47:06.117584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104777 ] 00:25:05.582 [2024-12-16 11:47:06.260022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.582 [2024-12-16 11:47:06.321636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.840 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.840 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:25:05.840 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:25:06.099 Nvme0n1 00:25:06.099 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:25:06.358 [ 00:25:06.358 { 00:25:06.358 "aliases": [ 00:25:06.358 "88fbe358-4398-43c9-ae7e-45f19d4c4e5a" 00:25:06.358 ], 00:25:06.358 "assigned_rate_limits": { 00:25:06.358 "r_mbytes_per_sec": 0, 00:25:06.358 "rw_ios_per_sec": 0, 00:25:06.358 "rw_mbytes_per_sec": 0, 00:25:06.358 "w_mbytes_per_sec": 0 00:25:06.358 }, 00:25:06.358 "block_size": 4096, 00:25:06.358 "claimed": false, 00:25:06.358 "driver_specific": { 00:25:06.358 "mp_policy": "active_passive", 00:25:06.358 "nvme": [ 00:25:06.358 { 00:25:06.358 "ctrlr_data": { 00:25:06.358 "ana_reporting": false, 00:25:06.358 "cntlid": 1, 00:25:06.358 "firmware_revision": "25.01", 00:25:06.358 "model_number": "SPDK bdev Controller", 00:25:06.358 "multi_ctrlr": true, 00:25:06.358 "oacs": { 00:25:06.358 "firmware": 0, 00:25:06.358 "format": 0, 00:25:06.358 "ns_manage": 0, 00:25:06.358 "security": 0 00:25:06.358 }, 00:25:06.358 "serial_number": "SPDK0", 00:25:06.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.358 "vendor_id": "0x8086" 00:25:06.358 }, 00:25:06.358 "ns_data": { 00:25:06.358 "can_share": true, 00:25:06.358 "id": 1 00:25:06.358 }, 00:25:06.358 "trid": { 00:25:06.358 "adrfam": "IPv4", 00:25:06.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.358 "traddr": "10.0.0.3", 00:25:06.358 "trsvcid": "4420", 00:25:06.358 "trtype": "TCP" 00:25:06.358 }, 00:25:06.358 "vs": { 00:25:06.358 "nvme_version": "1.3" 00:25:06.358 } 00:25:06.358 } 00:25:06.358 ] 00:25:06.358 }, 00:25:06.358 "memory_domains": [ 00:25:06.358 { 00:25:06.358 "dma_device_id": "system", 00:25:06.358 "dma_device_type": 1 
00:25:06.358 } 00:25:06.358 ], 00:25:06.358 "name": "Nvme0n1", 00:25:06.358 "num_blocks": 38912, 00:25:06.358 "numa_id": -1, 00:25:06.358 "product_name": "NVMe disk", 00:25:06.358 "supported_io_types": { 00:25:06.358 "abort": true, 00:25:06.358 "compare": true, 00:25:06.358 "compare_and_write": true, 00:25:06.358 "copy": true, 00:25:06.358 "flush": true, 00:25:06.358 "get_zone_info": false, 00:25:06.358 "nvme_admin": true, 00:25:06.358 "nvme_io": true, 00:25:06.358 "nvme_io_md": false, 00:25:06.358 "nvme_iov_md": false, 00:25:06.358 "read": true, 00:25:06.358 "reset": true, 00:25:06.358 "seek_data": false, 00:25:06.358 "seek_hole": false, 00:25:06.358 "unmap": true, 00:25:06.358 "write": true, 00:25:06.358 "write_zeroes": true, 00:25:06.358 "zcopy": false, 00:25:06.358 "zone_append": false, 00:25:06.358 "zone_management": false 00:25:06.358 }, 00:25:06.358 "uuid": "88fbe358-4398-43c9-ae7e-45f19d4c4e5a", 00:25:06.358 "zoned": false 00:25:06.358 } 00:25:06.358 ] 00:25:06.358 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=104810 00:25:06.358 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.358 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:25:06.616 Running I/O for 10 seconds... 00:25:07.551 Latency(us) 00:25:07.551 [2024-12-16T11:47:08.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:07.551 Nvme0n1 : 1.00 7887.00 30.81 0.00 0.00 0.00 0.00 0.00 00:25:07.551 [2024-12-16T11:47:08.304Z] =================================================================================================================== 00:25:07.551 [2024-12-16T11:47:08.304Z] Total : 7887.00 30.81 0.00 0.00 0.00 0.00 0.00 00:25:07.551 00:25:08.486 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:08.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:08.486 Nvme0n1 : 2.00 8120.00 31.72 0.00 0.00 0.00 0.00 0.00 00:25:08.486 [2024-12-16T11:47:09.239Z] =================================================================================================================== 00:25:08.486 [2024-12-16T11:47:09.239Z] Total : 8120.00 31.72 0.00 0.00 0.00 0.00 0.00 00:25:08.486 00:25:08.744 true 00:25:08.744 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:25:08.744 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:09.003 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:25:09.003 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:25:09.003 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@65 -- # wait 104810 00:25:09.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:09.570 Nvme0n1 : 3.00 8110.33 31.68 0.00 0.00 0.00 0.00 0.00 00:25:09.570 [2024-12-16T11:47:10.323Z] =================================================================================================================== 00:25:09.570 [2024-12-16T11:47:10.323Z] Total : 8110.33 31.68 0.00 0.00 0.00 0.00 0.00 00:25:09.570 00:25:10.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:10.505 Nvme0n1 : 4.00 8067.25 31.51 0.00 0.00 0.00 0.00 0.00 00:25:10.505 [2024-12-16T11:47:11.258Z] =================================================================================================================== 00:25:10.505 [2024-12-16T11:47:11.258Z] Total : 8067.25 31.51 0.00 0.00 0.00 0.00 0.00 00:25:10.505 00:25:11.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:11.440 Nvme0n1 : 5.00 8065.20 31.50 0.00 0.00 0.00 0.00 0.00 00:25:11.440 [2024-12-16T11:47:12.193Z] =================================================================================================================== 00:25:11.440 [2024-12-16T11:47:12.193Z] Total : 8065.20 31.50 0.00 0.00 0.00 0.00 0.00 00:25:11.440 00:25:12.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:12.859 Nvme0n1 : 6.00 8041.67 31.41 0.00 0.00 0.00 0.00 0.00 00:25:12.859 [2024-12-16T11:47:13.612Z] =================================================================================================================== 00:25:12.859 [2024-12-16T11:47:13.612Z] Total : 8041.67 31.41 0.00 0.00 0.00 0.00 0.00 00:25:12.859 00:25:13.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:13.794 Nvme0n1 : 7.00 7653.57 29.90 0.00 0.00 0.00 0.00 0.00 00:25:13.794 [2024-12-16T11:47:14.547Z] =================================================================================================================== 00:25:13.794 [2024-12-16T11:47:14.547Z] Total : 7653.57 29.90 0.00 0.00 0.00 0.00 0.00 00:25:13.794 00:25:14.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:14.727 Nvme0n1 : 8.00 7659.75 29.92 0.00 0.00 0.00 0.00 0.00 00:25:14.727 [2024-12-16T11:47:15.480Z] =================================================================================================================== 00:25:14.727 [2024-12-16T11:47:15.480Z] Total : 7659.75 29.92 0.00 0.00 0.00 0.00 0.00 00:25:14.727 00:25:15.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:15.660 Nvme0n1 : 9.00 7662.22 29.93 0.00 0.00 0.00 0.00 0.00 00:25:15.660 [2024-12-16T11:47:16.413Z] =================================================================================================================== 00:25:15.660 [2024-12-16T11:47:16.413Z] Total : 7662.22 29.93 0.00 0.00 0.00 0.00 0.00 00:25:15.660 00:25:16.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:16.595 Nvme0n1 : 10.00 7640.40 29.85 0.00 0.00 0.00 0.00 0.00 00:25:16.595 [2024-12-16T11:47:17.348Z] =================================================================================================================== 00:25:16.595 [2024-12-16T11:47:17.348Z] Total : 7640.40 29.85 0.00 0.00 0.00 0.00 0.00 00:25:16.595 00:25:16.595 00:25:16.595 Latency(us) 00:25:16.595 [2024-12-16T11:47:17.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.595 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:25:16.595 Nvme0n1 : 10.02 7640.29 29.84 0.00 0.00 16747.45 5540.77 335544.32 00:25:16.595 [2024-12-16T11:47:17.348Z] =================================================================================================================== 00:25:16.595 [2024-12-16T11:47:17.348Z] Total : 7640.29 29.84 0.00 0.00 16747.45 5540.77 335544.32 00:25:16.595 { 00:25:16.595 "results": [ 00:25:16.595 { 00:25:16.595 "job": "Nvme0n1", 00:25:16.595 "core_mask": "0x2", 00:25:16.595 "workload": "randwrite", 00:25:16.595 "status": "finished", 00:25:16.595 "queue_depth": 128, 00:25:16.595 "io_size": 4096, 00:25:16.595 "runtime": 10.016893, 00:25:16.595 "iops": 7640.293252608369, 00:25:16.595 "mibps": 29.84489551800144, 00:25:16.595 "io_failed": 0, 00:25:16.595 "io_timeout": 0, 00:25:16.595 "avg_latency_us": 16747.449913001336, 00:25:16.595 "min_latency_us": 5540.770909090909, 00:25:16.595 "max_latency_us": 335544.32 00:25:16.595 } 00:25:16.595 ], 00:25:16.595 "core_count": 1 00:25:16.595 } 00:25:16.595 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 104777 00:25:16.595 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 104777 ']' 00:25:16.595 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 104777 00:25:16.595 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:25:16.595 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.595 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104777 00:25:16.595 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:16.595 killing process with pid 104777 00:25:16.595 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:16.595 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104777' 00:25:16.595 Received shutdown signal, test time was about 10.000000 seconds 00:25:16.595 00:25:16.595 Latency(us) 00:25:16.595 [2024-12-16T11:47:17.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.595 [2024-12-16T11:47:17.348Z] =================================================================================================================== 00:25:16.595 [2024-12-16T11:47:17.348Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.595 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 104777 00:25:16.595 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 104777 00:25:16.853 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:17.112 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:17.370 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:17.370 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:25:17.629 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:25:17.629 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:25:17.629 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 104178 00:25:17.629 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 104178 00:25:17.887 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 104178 Killed "${NVMF_APP[@]}" "$@" 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=104969 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 104969 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 104969 ']' 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
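Aside: the lvs_grow_dirty steps traced above amount to a crash/recovery drill. The target holding the grown lvstore is SIGKILLed so the lvstore is left dirty on disk, then a fresh target is started and blobstore recovery replays the metadata. A minimal sketch of that sequence, using the paths and flags from this run; the $nvmf_pid bookkeeping is an assumption, not the harness's exact variable names:

    kill -9 "$nvmf_pid"            # hard-kill: no clean lvstore shutdown, metadata stays dirty
    wait "$nvmf_pid" || true       # reap the killed job (wait exits non-zero, 128+SIGKILL)
    ip netns exec nvmf_tgt_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmf_pid=$!
    # Recovery itself is triggered later, when the AIO bdev backing the lvstore
    # is re-created and examined (see "Performing recovery on blobstore" below).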
00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.887 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:17.887 [2024-12-16 11:47:18.458521] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:17.887 [2024-12-16 11:47:18.459820] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:25:17.887 [2024-12-16 11:47:18.459935] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.887 [2024-12-16 11:47:18.614317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.145 [2024-12-16 11:47:18.678956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.145 [2024-12-16 11:47:18.679026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.145 [2024-12-16 11:47:18.679065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.145 [2024-12-16 11:47:18.679076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.145 [2024-12-16 11:47:18.679085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.145 [2024-12-16 11:47:18.679575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.145 [2024-12-16 11:47:18.782299] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:18.145 [2024-12-16 11:47:18.782723] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
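For reference, the recovery check that the following trace performs reduces to a handful of RPCs. A condensed sketch with the UUID, file path, and expected counts taken from this run; the arithmetic guard mirrors the script's (( free_clusters == 61 )) and (( data_clusters == 99 )) assertions:

    # Re-register the backing file; blobstore recovery runs while the bdev is examined.
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    lvs=6c149d7b-4159-4c42-af9f-e8b3c2036754
    free=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    # Post-grow geometry survived the crash: 99 clusters total, 38 allocated
    # to the recovered lvol, so 61 remain free.
    (( free == 61 && total == 99 ))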
00:25:19.080 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.080 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:25:19.080 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:19.080 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:19.080 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:19.080 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.080 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:19.339 [2024-12-16 11:47:19.850007] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:19.339 [2024-12-16 11:47:19.850623] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:19.339 [2024-12-16 11:47:19.850947] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:19.339 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:25:19.339 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 88fbe358-4398-43c9-ae7e-45f19d4c4e5a 00:25:19.339 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=88fbe358-4398-43c9-ae7e-45f19d4c4e5a 00:25:19.339 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:19.339 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:25:19.339 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:19.339 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:19.339 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:19.597 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88fbe358-4398-43c9-ae7e-45f19d4c4e5a -t 2000 00:25:19.855 [ 00:25:19.855 { 00:25:19.855 "aliases": [ 00:25:19.855 "lvs/lvol" 00:25:19.855 ], 00:25:19.855 "assigned_rate_limits": { 00:25:19.855 "r_mbytes_per_sec": 0, 00:25:19.855 "rw_ios_per_sec": 0, 00:25:19.855 "rw_mbytes_per_sec": 0, 00:25:19.855 "w_mbytes_per_sec": 0 00:25:19.855 }, 00:25:19.855 "block_size": 4096, 00:25:19.855 "claimed": false, 00:25:19.855 "driver_specific": { 00:25:19.855 "lvol": { 00:25:19.855 "base_bdev": "aio_bdev", 00:25:19.855 "clone": false, 00:25:19.855 "esnap_clone": false, 00:25:19.855 
"lvol_store_uuid": "6c149d7b-4159-4c42-af9f-e8b3c2036754", 00:25:19.855 "num_allocated_clusters": 38, 00:25:19.855 "snapshot": false, 00:25:19.855 "thin_provision": false 00:25:19.855 } 00:25:19.855 }, 00:25:19.855 "name": "88fbe358-4398-43c9-ae7e-45f19d4c4e5a", 00:25:19.855 "num_blocks": 38912, 00:25:19.855 "product_name": "Logical Volume", 00:25:19.855 "supported_io_types": { 00:25:19.855 "abort": false, 00:25:19.855 "compare": false, 00:25:19.855 "compare_and_write": false, 00:25:19.855 "copy": false, 00:25:19.855 "flush": false, 00:25:19.855 "get_zone_info": false, 00:25:19.855 "nvme_admin": false, 00:25:19.855 "nvme_io": false, 00:25:19.855 "nvme_io_md": false, 00:25:19.855 "nvme_iov_md": false, 00:25:19.855 "read": true, 00:25:19.855 "reset": true, 00:25:19.855 "seek_data": true, 00:25:19.855 "seek_hole": true, 00:25:19.855 "unmap": true, 00:25:19.855 "write": true, 00:25:19.855 "write_zeroes": true, 00:25:19.855 "zcopy": false, 00:25:19.855 "zone_append": false, 00:25:19.855 "zone_management": false 00:25:19.855 }, 00:25:19.855 "uuid": "88fbe358-4398-43c9-ae7e-45f19d4c4e5a", 00:25:19.855 "zoned": false 00:25:19.855 } 00:25:19.855 ] 00:25:19.855 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:25:19.855 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:19.855 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:25:20.113 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:25:20.113 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:25:20.113 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:20.371 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:25:20.371 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:20.629 [2024-12-16 11:47:21.312397] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:25:20.629 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:20.629 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:25:20.629 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:20.629 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:20.629 
11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:20.629 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:20.629 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:20.629 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:20.629 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:20.629 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:20.629 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:20.629 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:21.196 2024/12/16 11:47:21 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:6c149d7b-4159-4c42-af9f-e8b3c2036754], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:25:21.196 request: 00:25:21.196 { 00:25:21.196 "method": "bdev_lvol_get_lvstores", 00:25:21.196 "params": { 00:25:21.196 "uuid": "6c149d7b-4159-4c42-af9f-e8b3c2036754" 00:25:21.196 } 00:25:21.196 } 00:25:21.196 Got JSON-RPC error response 00:25:21.196 GoRPCClient: error on JSON-RPC call 00:25:21.196 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:25:21.196 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:21.196 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:21.196 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:21.196 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:21.196 aio_bdev 00:25:21.196 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 88fbe358-4398-43c9-ae7e-45f19d4c4e5a 00:25:21.196 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=88fbe358-4398-43c9-ae7e-45f19d4c4e5a 00:25:21.196 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:21.196 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:25:21.196 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:21.196 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:21.196 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:21.454 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88fbe358-4398-43c9-ae7e-45f19d4c4e5a -t 2000 00:25:21.713 [ 00:25:21.713 { 00:25:21.713 "aliases": [ 00:25:21.713 "lvs/lvol" 00:25:21.713 ], 00:25:21.713 "assigned_rate_limits": { 00:25:21.713 "r_mbytes_per_sec": 0, 00:25:21.713 "rw_ios_per_sec": 0, 00:25:21.713 "rw_mbytes_per_sec": 0, 00:25:21.713 "w_mbytes_per_sec": 0 00:25:21.713 }, 00:25:21.713 "block_size": 4096, 00:25:21.713 "claimed": false, 00:25:21.713 "driver_specific": { 00:25:21.713 "lvol": { 00:25:21.713 "base_bdev": "aio_bdev", 00:25:21.713 "clone": false, 00:25:21.713 "esnap_clone": false, 00:25:21.713 "lvol_store_uuid": "6c149d7b-4159-4c42-af9f-e8b3c2036754", 00:25:21.713 "num_allocated_clusters": 38, 00:25:21.713 "snapshot": false, 00:25:21.713 "thin_provision": false 00:25:21.713 } 00:25:21.713 }, 00:25:21.713 "name": "88fbe358-4398-43c9-ae7e-45f19d4c4e5a", 00:25:21.713 "num_blocks": 38912, 00:25:21.713 "product_name": "Logical Volume", 00:25:21.713 "supported_io_types": { 00:25:21.713 "abort": false, 00:25:21.713 "compare": false, 00:25:21.713 "compare_and_write": false, 00:25:21.713 "copy": false, 00:25:21.713 "flush": false, 00:25:21.713 "get_zone_info": false, 00:25:21.713 "nvme_admin": false, 00:25:21.713 "nvme_io": false, 00:25:21.713 "nvme_io_md": false, 00:25:21.713 "nvme_iov_md": false, 00:25:21.713 "read": true, 00:25:21.713 "reset": true, 00:25:21.713 "seek_data": true, 00:25:21.713 "seek_hole": true, 00:25:21.713 "unmap": true, 00:25:21.713 "write": true, 00:25:21.713 "write_zeroes": true, 00:25:21.713 "zcopy": false, 00:25:21.713 "zone_append": false, 00:25:21.713 "zone_management": false 00:25:21.713 }, 00:25:21.713 "uuid": "88fbe358-4398-43c9-ae7e-45f19d4c4e5a", 00:25:21.713 "zoned": false 00:25:21.713 } 00:25:21.713 ] 00:25:21.971 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:25:21.971 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:21.971 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:25:22.229 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:25:22.229 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:22.229 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:25:22.487 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:25:22.487 
11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 88fbe358-4398-43c9-ae7e-45f19d4c4e5a 00:25:22.745 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c149d7b-4159-4c42-af9f-e8b3c2036754 00:25:23.005 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:23.268 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:23.835 ************************************ 00:25:23.835 END TEST lvs_grow_dirty 00:25:23.835 ************************************ 00:25:23.835 00:25:23.835 real 0m21.363s 00:25:23.835 user 0m28.708s 00:25:23.835 sys 0m8.140s 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:23.835 nvmf_trace.0 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:23.835 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:24.093 11:47:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:24.093 rmmod nvme_tcp 00:25:24.093 rmmod nvme_fabrics 00:25:24.093 rmmod nvme_keyring 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 104969 ']' 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 104969 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 104969 ']' 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 104969 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104969 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:24.093 killing process with pid 104969 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104969' 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 104969 00:25:24.093 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 104969 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:25:24.659 00:25:24.659 real 0m43.130s 00:25:24.659 user 0m48.472s 00:25:24.659 sys 0m11.706s 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.659 ************************************ 00:25:24.659 END TEST nvmf_lvs_grow 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:24.659 ************************************ 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.659 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:24.917 ************************************ 00:25:24.917 START TEST nvmf_bdev_io_wait 00:25:24.917 ************************************ 00:25:24.917 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:25:24.917 * Looking for test storage... 00:25:24.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:24.917 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:24.917 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:25:24.917 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:24.917 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:24.917 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:24.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.918 --rc genhtml_branch_coverage=1 00:25:24.918 --rc genhtml_function_coverage=1 00:25:24.918 --rc genhtml_legend=1 00:25:24.918 --rc geninfo_all_blocks=1 00:25:24.918 --rc geninfo_unexecuted_blocks=1 00:25:24.918 00:25:24.918 ' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:24.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.918 --rc genhtml_branch_coverage=1 00:25:24.918 --rc genhtml_function_coverage=1 00:25:24.918 --rc genhtml_legend=1 00:25:24.918 --rc geninfo_all_blocks=1 00:25:24.918 --rc geninfo_unexecuted_blocks=1 00:25:24.918 00:25:24.918 ' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:24.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.918 --rc genhtml_branch_coverage=1 00:25:24.918 --rc genhtml_function_coverage=1 00:25:24.918 --rc genhtml_legend=1 00:25:24.918 --rc geninfo_all_blocks=1 00:25:24.918 --rc geninfo_unexecuted_blocks=1 00:25:24.918 00:25:24.918 ' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:24.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.918 --rc genhtml_branch_coverage=1 00:25:24.918 --rc genhtml_function_coverage=1 00:25:24.918 --rc genhtml_legend=1 00:25:24.918 --rc geninfo_all_blocks=1 00:25:24.918 --rc 
geninfo_unexecuted_blocks=1 00:25:24.918 00:25:24.918 ' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:24.918 Cannot find device "nvmf_init_br" 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:24.918 Cannot find device "nvmf_init_br2" 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:25:24.918 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:25.176 Cannot find device "nvmf_tgt_br" 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:25.176 Cannot find device "nvmf_tgt_br2" 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:25.176 Cannot find device "nvmf_init_br" 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:25.176 Cannot find device "nvmf_init_br2" 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:25:25.176 Cannot find device "nvmf_tgt_br" 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:25.176 Cannot find device "nvmf_tgt_br2" 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:25.176 Cannot find device "nvmf_br" 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:25.176 Cannot find device "nvmf_init_if" 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:25.176 Cannot find device "nvmf_init_if2" 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:25.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:25.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:25.176 11:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:25.176 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:25.177 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:25.177 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:25.177 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:25.177 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:25.177 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:25.177 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:25.435 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:25.435 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:25.435 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:25.435 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:25.435 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:25.435 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:25.435 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:25.435 
11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:25:25.435 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:25:25.435 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:25:25.435 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms
00:25:25.435
00:25:25.435 --- 10.0.0.3 ping statistics ---
00:25:25.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:25.435 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms
00:25:25.435 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:25:25.435 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:25:25.435 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms
00:25:25.435
00:25:25.435 --- 10.0.0.4 ping statistics ---
00:25:25.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:25.435 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
00:25:25.435 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:25.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:25.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms
00:25:25.435
00:25:25.435 --- 10.0.0.1 ping statistics ---
00:25:25.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:25.435 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms
00:25:25.435 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:25:25.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:25.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms
00:25:25.435
00:25:25.435 --- 10.0.0.2 ping statistics ---
00:25:25.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:25.435 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:25:25.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=105449
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 105449
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 105449 ']'
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
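The nvmf_veth_init sequence traced above reduces to a short piece of iproute2 plumbing. A minimal standalone sketch of the same topology, using the device names and addresses from the trace but only one initiator-side and one target-side veth pair instead of the two pairs the harness creates (assumes root; error handling omitted):

# target interfaces live in their own network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# a bridge joins the host-side peers so the two namespaces can talk
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# admit NVMe/TCP (port 4420) and bridge-local forwarding, then verify
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                  # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The four pings in the trace are this same check run in both directions over both pairs.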
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:25.435 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:25:25.435 [2024-12-16 11:47:26.086633] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:25:25.435 [2024-12-16 11:47:26.087974] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:25:25.435 [2024-12-16 11:47:26.088175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:25.694 [2024-12-16 11:47:26.239536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:25.694 [2024-12-16 11:47:26.329097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:25.694 [2024-12-16 11:47:26.329442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:25.694 [2024-12-16 11:47:26.329632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:25.694 [2024-12-16 11:47:26.329885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:25.694 [2024-12-16 11:47:26.329931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:25.694 [2024-12-16 11:47:26.331423] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:25:25.694 [2024-12-16 11:47:26.331576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:25:25.694 [2024-12-16 11:47:26.331531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:25:25.694 [2024-12-16 11:47:26.331581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:25:25.694 [2024-12-16 11:47:26.333132] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
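nvmfappstart and waitforlisten, traced above, amount to backgrounding the target inside the namespace and polling its RPC socket: -m 0xF is why exactly four reactors come up (cores 0-3), and --wait-for-rpc parks the app before subsystem initialization so options can still be changed. A hedged sketch of the same pattern (the polling loop is a simplification of waitforlisten, not its actual body):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
# poll the RPC socket instead of sleeping a fixed time
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target already died
    sleep 0.1
done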
00:25:25.694 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.694 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:25:25.694 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:25.694 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:25.694 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:25.953 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.953 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:25:25.953 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.953 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:25.953 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.953 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:25:25.953 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.953 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:25.954 [2024-12-16 11:47:26.541907] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:25.954 [2024-12-16 11:47:26.542186] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:25.954 [2024-12-16 11:47:26.543106] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:25.954 [2024-12-16 11:47:26.543968] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
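The two rpc_cmd calls above are the crux of this test: bdev_set_options -p 5 -c 1 shrinks the bdev_io pool and per-thread cache to almost nothing before framework_start_init finishes startup, presumably so the bdevperf jobs below exhaust the pool and exercise the bdev-io-wait (queued I/O retry) path the test is named for. Replayed by hand against the same socket it would look roughly like:

# must happen while the app is still parked by --wait-for-rpc
scripts/rpc.py -s /var/tmp/spdk.sock bdev_set_options -p 5 -c 1
scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init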
00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:25.954 [2024-12-16 11:47:26.553396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:25.954 Malloc0 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:25.954 [2024-12-16 11:47:26.629767] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=105483 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=105485 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.954 { 00:25:25.954 "params": { 00:25:25.954 "name": "Nvme$subsystem", 00:25:25.954 "trtype": "$TEST_TRANSPORT", 00:25:25.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.954 "adrfam": "ipv4", 00:25:25.954 "trsvcid": "$NVMF_PORT", 00:25:25.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.954 "hdgst": ${hdgst:-false}, 00:25:25.954 "ddgst": ${ddgst:-false} 00:25:25.954 }, 00:25:25.954 "method": "bdev_nvme_attach_controller" 00:25:25.954 } 00:25:25.954 EOF 00:25:25.954 )") 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=105487 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.954 { 00:25:25.954 "params": { 00:25:25.954 "name": "Nvme$subsystem", 00:25:25.954 "trtype": "$TEST_TRANSPORT", 00:25:25.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.954 "adrfam": "ipv4", 00:25:25.954 "trsvcid": "$NVMF_PORT", 00:25:25.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.954 "hdgst": ${hdgst:-false}, 00:25:25.954 "ddgst": ${ddgst:-false} 00:25:25.954 }, 00:25:25.954 "method": "bdev_nvme_attach_controller" 00:25:25.954 } 00:25:25.954 EOF 00:25:25.954 )") 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=105490 00:25:25.954 11:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.954 { 00:25:25.954 "params": { 00:25:25.954 "name": "Nvme$subsystem", 00:25:25.954 "trtype": "$TEST_TRANSPORT", 00:25:25.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.954 "adrfam": "ipv4", 00:25:25.954 "trsvcid": "$NVMF_PORT", 00:25:25.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.954 "hdgst": ${hdgst:-false}, 00:25:25.954 "ddgst": ${ddgst:-false} 00:25:25.954 }, 00:25:25.954 "method": "bdev_nvme_attach_controller" 00:25:25.954 } 00:25:25.954 EOF 00:25:25.954 )") 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
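Each bdevperf instance above receives its bdev configuration as JSON on /dev/fd/63, i.e. through process substitution of gen_nvmf_target_json. A trimmed, hypothetical equivalent is sketched below; the real helper pipes the heredoc fragments through jq and may merge additional config entries, so treat the exact wrapper shape as an assumption:

# hand-rolled stand-in for gen_nvmf_target_json (values copied from the trace)
gen_json() {
    cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
}
# the shell exposes the substituted pipe as /dev/fd/63, matching the trace
build/examples/bdevperf --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256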
00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:25.954 "params": { 00:25:25.954 "name": "Nvme1", 00:25:25.954 "trtype": "tcp", 00:25:25.954 "traddr": "10.0.0.3", 00:25:25.954 "adrfam": "ipv4", 00:25:25.954 "trsvcid": "4420", 00:25:25.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:25.954 "hdgst": false, 00:25:25.954 "ddgst": false 00:25:25.954 }, 00:25:25.954 "method": "bdev_nvme_attach_controller" 00:25:25.954 }' 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:25.954 "params": { 00:25:25.954 "name": "Nvme1", 00:25:25.954 "trtype": "tcp", 00:25:25.954 "traddr": "10.0.0.3", 00:25:25.954 "adrfam": "ipv4", 00:25:25.954 "trsvcid": "4420", 00:25:25.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:25.954 "hdgst": false, 00:25:25.954 "ddgst": false 00:25:25.954 }, 00:25:25.954 "method": "bdev_nvme_attach_controller" 00:25:25.954 }' 00:25:25.954 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.954 { 00:25:25.954 "params": { 00:25:25.954 "name": "Nvme$subsystem", 00:25:25.954 "trtype": "$TEST_TRANSPORT", 00:25:25.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.954 "adrfam": "ipv4", 00:25:25.954 "trsvcid": "$NVMF_PORT", 00:25:25.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.955 "hdgst": ${hdgst:-false}, 00:25:25.955 "ddgst": ${ddgst:-false} 00:25:25.955 }, 00:25:25.955 "method": "bdev_nvme_attach_controller" 00:25:25.955 } 00:25:25.955 EOF 00:25:25.955 )") 00:25:25.955 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:25.955 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:25:25.955 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
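The four instances (write, read, flush, unmap) run concurrently: disjoint core masks (0x10/0x20/0x40/0x80) keep them off each other's cores and off the target's 0xF, and distinct -i instance ids keep their shared-memory file prefixes (spdk1..spdk4, visible in the EAL parameters below) from colliding. The overall shape, reusing gen_json from the previous sketch:

declare -A masks=([write]=0x10 [read]=0x20 [flush]=0x40 [unmap]=0x80)
declare -A ids=([write]=1 [read]=2 [flush]=3 [unmap]=4)
pids=()
for w in write read flush unmap; do
    build/examples/bdevperf -m "${masks[$w]}" -i "${ids[$w]}" \
        --json <(gen_json) -q 128 -o 4096 -w "$w" -t 1 -s 256 &
    pids+=($!)
done
wait "${pids[@]}"   # the trace does this as wait $WRITE_PID etc.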
00:25:25.955 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:25.955 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:25.955 "params": { 00:25:25.955 "name": "Nvme1", 00:25:25.955 "trtype": "tcp", 00:25:25.955 "traddr": "10.0.0.3", 00:25:25.955 "adrfam": "ipv4", 00:25:25.955 "trsvcid": "4420", 00:25:25.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:25.955 "hdgst": false, 00:25:25.955 "ddgst": false 00:25:25.955 }, 00:25:25.955 "method": "bdev_nvme_attach_controller" 00:25:25.955 }' 00:25:25.955 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:25.955 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:25.955 "params": { 00:25:25.955 "name": "Nvme1", 00:25:25.955 "trtype": "tcp", 00:25:25.955 "traddr": "10.0.0.3", 00:25:25.955 "adrfam": "ipv4", 00:25:25.955 "trsvcid": "4420", 00:25:25.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:25.955 "hdgst": false, 00:25:25.955 "ddgst": false 00:25:25.955 }, 00:25:25.955 "method": "bdev_nvme_attach_controller" 00:25:25.955 }' 00:25:25.955 [2024-12-16 11:47:26.689820] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:25:25.955 [2024-12-16 11:47:26.690535] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:25:25.955 [2024-12-16 11:47:26.703184] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:25:25.955 [2024-12-16 11:47:26.703267] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:25:26.213 [2024-12-16 11:47:26.721051] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:25:26.213 [2024-12-16 11:47:26.721207] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:26.213 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 105483 00:25:26.213 [2024-12-16 11:47:26.727954] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:25:26.213 [2024-12-16 11:47:26.728036] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:25:26.213 [2024-12-16 11:47:26.909932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:26.213 [2024-12-16 11:47:26.962921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:25:26.472 [2024-12-16 11:47:26.995036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:26.472 [2024-12-16 11:47:27.055077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:25:26.472 [2024-12-16 11:47:27.085595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:26.472 [2024-12-16 11:47:27.143959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:25:26.472 [2024-12-16 11:47:27.165323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:26.472 Running I/O for 1 seconds...
00:25:26.472 Running I/O for 1 seconds...
00:25:26.472 [2024-12-16 11:47:27.224483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:25:26.730 Running I/O for 1 seconds...
00:25:26.730 Running I/O for 1 seconds...
00:25:27.663 8566.00 IOPS, 33.46 MiB/s
00:25:27.663 Latency(us)
00:25:27.663 [2024-12-16T11:47:28.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.663 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:25:27.663 Nvme1n1 : 1.01 8602.52 33.60 0.00 0.00 14795.45 4170.47 17277.67
00:25:27.663 [2024-12-16T11:47:28.416Z] ===================================================================================================================
00:25:27.663 [2024-12-16T11:47:28.417Z] Total : 8602.52 33.60 0.00 0.00 14795.45 4170.47 17277.67
00:25:27.664 7118.00 IOPS, 27.80 MiB/s
00:25:27.664 Latency(us)
00:25:27.664 [2024-12-16T11:47:28.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.664 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:25:27.664 Nvme1n1 : 1.01 7182.12 28.06 0.00 0.00 17729.54 8340.95 25618.62
00:25:27.664 [2024-12-16T11:47:28.417Z] ===================================================================================================================
00:25:27.664 [2024-12-16T11:47:28.417Z] Total : 7182.12 28.06 0.00 0.00 17729.54 8340.95 25618.62
00:25:27.664 189624.00 IOPS, 740.72 MiB/s
00:25:27.664 Latency(us)
00:25:27.664 [2024-12-16T11:47:28.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.664 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:25:27.664 Nvme1n1 : 1.00 189264.26 739.31 0.00 0.00 672.73 292.31 1884.16
00:25:27.664 [2024-12-16T11:47:28.417Z] ===================================================================================================================
00:25:27.664 [2024-12-16T11:47:28.417Z] Total : 189264.26 739.31 0.00 0.00 672.73 292.31 1884.16
00:25:27.664 5690.00 IOPS, 22.23 MiB/s
00:25:27.664 Latency(us)
00:25:27.664 [2024-12-16T11:47:28.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.664 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:25:27.664 Nvme1n1 : 1.01 5789.26 22.61 0.00 0.00 22025.14 2829.96 36461.85
00:25:27.664 [2024-12-16T11:47:28.417Z] ===================================================================================================================
00:25:27.664 [2024-12-16T11:47:28.417Z] Total : 5789.26 22.61 0.00 0.00 22025.14 2829.96 36461.85
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 105485
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 105487
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 105490
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:27.922 rmmod nvme_tcp
00:25:27.922 rmmod nvme_fabrics
00:25:27.922 rmmod nvme_keyring
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 105449 ']'
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 105449
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 105449 ']'
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 105449
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm=
105449 00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:27.922 killing process with pid 105449 00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105449' 00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 105449 00:25:27.922 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 105449 00:25:28.180 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:28.180 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:28.180 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:28.180 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:25:28.180 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:25:28.180 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:28.180 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:25:28.180 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:28.180 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:28.180 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:28.180 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:28.180 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:28.438 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:28.438 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:28.438 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:28.438 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:28.438 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:28.438 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # 
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:25:28.438 00:25:28.438 real 0m3.713s 00:25:28.438 user 0m13.124s 00:25:28.438 sys 0m2.456s 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:28.438 ************************************ 00:25:28.438 END TEST nvmf_bdev_io_wait 00:25:28.438 ************************************ 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:28.438 ************************************ 00:25:28.438 START TEST nvmf_queue_depth 00:25:28.438 ************************************ 00:25:28.438 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:25:28.697 * Looking for test storage... 
00:25:28.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:28.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.697 --rc genhtml_branch_coverage=1 00:25:28.697 --rc genhtml_function_coverage=1 00:25:28.697 --rc genhtml_legend=1 00:25:28.697 --rc geninfo_all_blocks=1 00:25:28.697 --rc geninfo_unexecuted_blocks=1 00:25:28.697 00:25:28.697 ' 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:28.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.697 --rc genhtml_branch_coverage=1 00:25:28.697 --rc genhtml_function_coverage=1 00:25:28.697 --rc genhtml_legend=1 00:25:28.697 --rc geninfo_all_blocks=1 00:25:28.697 --rc geninfo_unexecuted_blocks=1 00:25:28.697 00:25:28.697 ' 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:28.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.697 --rc genhtml_branch_coverage=1 00:25:28.697 --rc genhtml_function_coverage=1 00:25:28.697 --rc genhtml_legend=1 00:25:28.697 --rc geninfo_all_blocks=1 00:25:28.697 --rc geninfo_unexecuted_blocks=1 00:25:28.697 00:25:28.697 ' 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:28.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.697 --rc genhtml_branch_coverage=1 00:25:28.697 --rc genhtml_function_coverage=1 00:25:28.697 --rc genhtml_legend=1 00:25:28.697 --rc geninfo_all_blocks=1 00:25:28.697 --rc 
geninfo_unexecuted_blocks=1 00:25:28.697 00:25:28.697 ' 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.697 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:28.698 Cannot find device "nvmf_init_br" 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:28.698 Cannot find device "nvmf_init_br2" 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:28.698 Cannot find device "nvmf_tgt_br" 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:28.698 Cannot find device "nvmf_tgt_br2" 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:28.698 Cannot find device "nvmf_init_br" 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:25:28.698 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:28.956 Cannot find device "nvmf_init_br2" 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:25:28.956 
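The "Cannot find device" messages above are expected on a fresh host: nvmf_veth_init first tears down whatever interfaces a previous run may have left behind, and each probe is paired with "true" so a missing device does not trip the script's error handling. The setup that follows builds a bridged veth topology with the target-side endpoints moved into the nvmf_tgt_ns_spdk namespace. A condensed sketch of that topology, using only commands and addresses visible in the trace (the second interface pair on each side, nvmf_init_if2/nvmf_tgt_if2, is built the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, host side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                             # bridge joins the two host-side peers
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                          # host now reaches the namespaced target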
11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:28.956 Cannot find device "nvmf_tgt_br" 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:28.956 Cannot find device "nvmf_tgt_br2" 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:28.956 Cannot find device "nvmf_br" 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:28.956 Cannot find device "nvmf_init_if" 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:28.956 Cannot find device "nvmf_init_if2" 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:28.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:28.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:25:28.956 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:28.957 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:29.215 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:29.215 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:25:29.215 00:25:29.215 --- 10.0.0.3 ping statistics --- 00:25:29.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.215 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:29.215 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:29.215 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:25:29.215 00:25:29.215 --- 10.0.0.4 ping statistics --- 00:25:29.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.215 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:29.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:25:29.215 00:25:29.215 --- 10.0.0.1 ping statistics --- 00:25:29.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.215 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:29.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:29.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:25:29.215 00:25:29.215 --- 10.0.0.2 ping statistics --- 00:25:29.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.215 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=105754 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 105754 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 105754 ']' 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
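Each firewall rule above is installed through the ipts wrapper, which re-issues the rule with an iptables comment tagging it as SPDK's; the tag records the rule text itself. This is what lets the teardown path near the end of the test (iptables-save | grep -v SPDK_NVMF | iptables-restore, invoked as iptr) strip exactly the rules this run added and nothing else. A plausible reconstruction of the two helpers, inferred from their expansions in the trace; the real definitions live in nvmf/common.sh and may differ in detail:

  ipts() {
    # Install the rule, tagged with a comment that records the rule itself.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }

  iptr() {
    # Reload the ruleset with every SPDK_NVMF-tagged rule filtered out.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
  }

  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on 4420
  iptr                                                            # later: remove only tagged rules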
00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.215 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:29.215 [2024-12-16 11:47:29.831193] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:29.215 [2024-12-16 11:47:29.832520] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:25:29.215 [2024-12-16 11:47:29.832598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.473 [2024-12-16 11:47:29.992183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.473 [2024-12-16 11:47:30.051683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.473 [2024-12-16 11:47:30.051760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.473 [2024-12-16 11:47:30.051784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.473 [2024-12-16 11:47:30.051800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.473 [2024-12-16 11:47:30.051813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.473 [2024-12-16 11:47:30.052307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.473 [2024-12-16 11:47:30.151764] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:29.473 [2024-12-16 11:47:30.152231] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
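The target is now up: nvmfappstart launched nvmf_tgt inside the namespace and waitforlisten blocked until the RPC socket answered. Because the suite runs with --interrupt-mode, the notices above confirm the reactor and both spdk_threads wait on events instead of busy-polling. A minimal sketch of the launch-and-wait step; the binary path and flags are taken from the trace, while the polling loop is an assumption standing in for waitforlisten (defined in common/autotest_common.sh, not shown here):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &     # shm id 0, all tracepoint groups, core 1
  nvmfpid=$!

  # Assumed shape of waitforlisten: poll until the RPC socket responds.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done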
00:25:29.473 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.473 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:25:29.473 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:29.473 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:29.473 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:29.473 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.473 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:29.473 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.473 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:29.731 [2024-12-16 11:47:30.233287] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:29.731 Malloc0 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
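With the target listening for RPCs, queue_depth.sh provisions it in five calls: a TCP transport, a 64 MiB RAM-backed Malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 above), a subsystem, its namespace, and a listener. The same sequence as explicit rpc.py invocations, collected from the rpc_cmd calls in the trace (rpc_cmd talks to the default socket /var/tmp/spdk.sock; per rpc.py's nvmf_create_transport options, -u sets the in-capsule data size):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The listener notice that follows ("NVMe/TCP Target Listening on 10.0.0.3 port 4420") confirms the last call took effect; bdevperf then attaches to the subsystem at queue depth 1024 for the actual test.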
00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:29.731 [2024-12-16 11:47:30.297213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=105791 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 105791 /var/tmp/bdevperf.sock 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 105791 ']' 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:29.731 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:29.732 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:29.732 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.732 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:29.732 [2024-12-16 11:47:30.361877] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:25:29.732 [2024-12-16 11:47:30.361972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105791 ] 00:25:29.990 [2024-12-16 11:47:30.511629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.990 [2024-12-16 11:47:30.574043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.990 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.990 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:25:29.990 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:29.990 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.990 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:30.247 NVMe0n1 00:25:30.247 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.247 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:30.247 Running I/O for 10 seconds... 00:25:32.557 8157.00 IOPS, 31.86 MiB/s [2024-12-16T11:47:34.245Z] 8173.50 IOPS, 31.93 MiB/s [2024-12-16T11:47:35.179Z] 8255.67 IOPS, 32.25 MiB/s [2024-12-16T11:47:36.113Z] 8450.00 IOPS, 33.01 MiB/s [2024-12-16T11:47:37.047Z] 8486.80 IOPS, 33.15 MiB/s [2024-12-16T11:47:37.984Z] 8542.33 IOPS, 33.37 MiB/s [2024-12-16T11:47:38.920Z] 8615.29 IOPS, 33.65 MiB/s [2024-12-16T11:47:40.297Z] 8633.00 IOPS, 33.72 MiB/s [2024-12-16T11:47:41.233Z] 8645.22 IOPS, 33.77 MiB/s [2024-12-16T11:47:41.233Z] 8632.40 IOPS, 33.72 MiB/s 00:25:40.480 Latency(us) 00:25:40.480 [2024-12-16T11:47:41.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.480 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:25:40.480 Verification LBA range: start 0x0 length 0x4000 00:25:40.480 NVMe0n1 : 10.06 8679.25 33.90 0.00 0.00 117453.39 11796.48 80549.70 00:25:40.480 [2024-12-16T11:47:41.233Z] =================================================================================================================== 00:25:40.480 [2024-12-16T11:47:41.233Z] Total : 8679.25 33.90 0.00 0.00 117453.39 11796.48 80549.70 00:25:40.480 { 00:25:40.480 "results": [ 00:25:40.480 { 00:25:40.480 "job": "NVMe0n1", 00:25:40.480 "core_mask": "0x1", 00:25:40.480 "workload": "verify", 00:25:40.480 "status": "finished", 00:25:40.480 "verify_range": { 00:25:40.480 "start": 0, 00:25:40.480 "length": 16384 00:25:40.480 }, 00:25:40.480 "queue_depth": 1024, 00:25:40.480 "io_size": 4096, 00:25:40.480 "runtime": 10.064006, 00:25:40.480 "iops": 8679.24760776176, 00:25:40.480 "mibps": 33.90331096781937, 00:25:40.480 "io_failed": 0, 00:25:40.480 "io_timeout": 0, 00:25:40.480 "avg_latency_us": 117453.38888941621, 00:25:40.480 "min_latency_us": 11796.48, 00:25:40.480 "max_latency_us": 80549.70181818181 00:25:40.480 } 00:25:40.480 ], 00:25:40.480 
"core_count": 1 00:25:40.480 } 00:25:40.480 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 105791 00:25:40.480 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 105791 ']' 00:25:40.480 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 105791 00:25:40.480 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:25:40.480 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.480 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105791 00:25:40.480 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:40.480 killing process with pid 105791 00:25:40.480 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:40.480 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105791' 00:25:40.480 Received shutdown signal, test time was about 10.000000 seconds 00:25:40.480 00:25:40.480 Latency(us) 00:25:40.480 [2024-12-16T11:47:41.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.480 [2024-12-16T11:47:41.233Z] =================================================================================================================== 00:25:40.480 [2024-12-16T11:47:41.233Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:40.480 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 105791 00:25:40.480 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 105791 00:25:40.480 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:25:40.480 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:25:40.480 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:40.480 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:40.739 rmmod nvme_tcp 00:25:40.739 rmmod nvme_fabrics 00:25:40.739 rmmod nvme_keyring 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:25:40.739 11:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 105754 ']' 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 105754 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 105754 ']' 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 105754 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105754 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:40.739 killing process with pid 105754 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105754' 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 105754 00:25:40.739 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 105754 00:25:40.998 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:40.998 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:40.998 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:40.999 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:25:40.999 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:25:40.999 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:40.999 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:25:40.999 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:40.999 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:40.999 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:40.999 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:40.999 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:40.999 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:41.257 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:41.257 11:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:41.257 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:41.257 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:41.257 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:41.257 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:41.257 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:41.257 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:25:41.258 00:25:41.258 real 0m12.737s 00:25:41.258 user 0m20.819s 00:25:41.258 sys 0m2.561s 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:41.258 ************************************ 00:25:41.258 END TEST nvmf_queue_depth 00:25:41.258 ************************************ 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:41.258 ************************************ 00:25:41.258 START TEST nvmf_target_multipath 00:25:41.258 ************************************ 00:25:41.258 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:25:41.517 * Looking for test storage... 
00:25:41.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:41.517 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:41.517 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:25:41.517 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:41.517 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:41.517 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:41.517 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:41.517 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:41.517 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:25:41.517 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:25:41.517 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:41.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.518 --rc genhtml_branch_coverage=1 00:25:41.518 --rc genhtml_function_coverage=1 00:25:41.518 --rc genhtml_legend=1 00:25:41.518 --rc geninfo_all_blocks=1 00:25:41.518 --rc geninfo_unexecuted_blocks=1 00:25:41.518 00:25:41.518 ' 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:41.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.518 --rc genhtml_branch_coverage=1 00:25:41.518 --rc genhtml_function_coverage=1 00:25:41.518 --rc genhtml_legend=1 00:25:41.518 --rc geninfo_all_blocks=1 00:25:41.518 --rc geninfo_unexecuted_blocks=1 00:25:41.518 00:25:41.518 ' 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:41.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.518 --rc genhtml_branch_coverage=1 00:25:41.518 --rc genhtml_function_coverage=1 00:25:41.518 --rc genhtml_legend=1 00:25:41.518 --rc geninfo_all_blocks=1 00:25:41.518 --rc geninfo_unexecuted_blocks=1 00:25:41.518 00:25:41.518 ' 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:41.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.518 --rc genhtml_branch_coverage=1 00:25:41.518 --rc genhtml_function_coverage=1 00:25:41.518 --rc 
genhtml_legend=1 00:25:41.518 --rc geninfo_all_blocks=1 00:25:41.518 --rc geninfo_unexecuted_blocks=1 00:25:41.518 00:25:41.518 ' 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:41.518 11:47:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:41.518 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:41.519 11:47:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:41.519 Cannot find device "nvmf_init_br" 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:41.519 Cannot find device "nvmf_init_br2" 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:41.519 Cannot find device "nvmf_tgt_br" 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:41.519 Cannot find device "nvmf_tgt_br2" 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:25:41.519 Cannot find device "nvmf_init_br" 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:25:41.519 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:41.778 Cannot find device "nvmf_init_br2" 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:41.778 Cannot find device "nvmf_tgt_br" 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:41.778 Cannot find device "nvmf_tgt_br2" 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:41.778 Cannot find device "nvmf_br" 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:41.778 Cannot find device "nvmf_init_if" 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:41.778 Cannot find device "nvmf_init_if2" 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:41.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:41.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:41.778 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:41.779 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:41.779 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:41.779 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:41.779 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:41.779 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:41.779 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:41.779 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:41.779 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:41.779 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:42.049 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:42.049 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:42.049 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:42.049 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:42.049 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:25:42.049 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:25:42.049 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:25:42.049 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:25:42.049 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:25:42.049 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:25:42.049 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:25:42.049 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms
00:25:42.049 
00:25:42.050 --- 10.0.0.3 ping statistics ---
00:25:42.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:42.050 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:25:42.050 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:25:42.050 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms
00:25:42.050 
00:25:42.050 --- 10.0.0.4 ping statistics ---
00:25:42.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:42.050 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:42.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:42.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:25:42.050 
00:25:42.050 --- 10.0.0.1 ping statistics ---
00:25:42.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:42.050 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:25:42.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:42.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms
00:25:42.050 
00:25:42.050 --- 10.0.0.2 ping statistics ---
00:25:42.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:42.050 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']'
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']'
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=106159
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 106159
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 106159 ']'
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:42.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
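At this point the trace has verified connectivity across the veth/bridge topology and nvmfappstart has launched the NVMe-oF target inside the nvmf_tgt_ns_spdk namespace; waitforlisten then blocks until the target's JSON-RPC socket answers. A minimal sketch of that launch-and-wait pattern, with paths and flags taken from the trace above (the polling loop is a simplified stand-in for the real waitforlisten helper, assuming rpc_get_methods as the liveness probe):

    NETNS=nvmf_tgt_ns_spdk
    NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start the target on cores 0-3 (-m 0xF) in interrupt mode, inside the netns.
    ip netns exec "$NETNS" "$NVMF_TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # Poll the RPC socket until the app responds (max_retries=100 in the trace).
    for ((i = 0; i < 100; i++)); do
        "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done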
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:42.050 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:25:42.050 [2024-12-16 11:47:42.702722] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:25:42.050 [2024-12-16 11:47:42.704092] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:25:42.050 [2024-12-16 11:47:42.704188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:42.322 [2024-12-16 11:47:42.862042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:42.322 [2024-12-16 11:47:42.926011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:42.322 [2024-12-16 11:47:42.926087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:42.322 [2024-12-16 11:47:42.926103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:42.322 [2024-12-16 11:47:42.926116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:42.322 [2024-12-16 11:47:42.926127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:42.322 [2024-12-16 11:47:42.927451] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:25:42.322 [2024-12-16 11:47:42.927596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:25:42.322 [2024-12-16 11:47:42.927604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:25:42.322 [2024-12-16 11:47:42.927521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:25:42.322 [2024-12-16 11:47:43.034955] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:25:42.322 [2024-12-16 11:47:43.035461] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:25:42.322 [2024-12-16 11:47:43.035602] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:25:42.322 [2024-12-16 11:47:43.036220] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:25:42.322 [2024-12-16 11:47:43.036547] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
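The target is now up in interrupt mode with one reactor per core. The trace that follows builds the multipath configuration over RPC; condensed, with the xtrace prefixes stripped, the same rpc.py commands are (inline comments are annotations, not part of the run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    "$RPC" nvmf_create_transport -t tcp -o -u 8192       # TCP transport with the flags used by the test
    "$RPC" bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME -r   # -r enables ANA reporting
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
    # Two listeners on one subsystem give the host two paths to the same namespace.
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.4 -s 4420

The host then connects to both listeners with nvme connect, and the rest of the run flips each listener's ANA state (optimized, non_optimized, inaccessible) via nvmf_subsystem_listener_set_ana_state while fio drives I/O, checking /sys/block/nvme0cXn1/ana_state after each change.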
00:25:43.259 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.259 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:25:43.259 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:43.259 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:43.259 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:43.259 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.259 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:43.535 [2024-12-16 11:47:44.128741] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.535 11:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:43.793 Malloc0 00:25:43.793 11:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:25:44.051 11:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:44.617 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:44.618 [2024-12-16 11:47:45.340991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:44.618 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:25:44.876 [2024-12-16 11:47:45.604894] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:25:44.876 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:25:45.135 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:25:45.135 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:25:45.135 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:25:45.135 11:47:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:45.135 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:45.135 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:25:47.681 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=106300 00:25:47.682 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:25:47.682 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:25:47.682 [global] 00:25:47.682 thread=1 00:25:47.682 invalidate=1 00:25:47.682 rw=randrw 00:25:47.682 time_based=1 00:25:47.682 runtime=6 00:25:47.682 ioengine=libaio 00:25:47.682 direct=1 00:25:47.682 bs=4096 00:25:47.682 iodepth=128 00:25:47.682 norandommap=0 00:25:47.682 numjobs=1 00:25:47.682 00:25:47.682 verify_dump=1 00:25:47.682 verify_backlog=512 00:25:47.682 verify_state_save=0 00:25:47.682 do_verify=1 00:25:47.682 verify=crc32c-intel 00:25:47.682 [job0] 00:25:47.682 filename=/dev/nvme0n1 00:25:47.682 Could not set queue depth (nvme0n1) 00:25:47.682 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:47.682 fio-3.35 00:25:47.682 Starting 1 thread 00:25:48.247 11:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:48.505 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:49.071 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:50.007 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:50.007 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:25:50.007 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:50.007 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:50.265 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:50.524 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:51.486 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:51.486 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:25:51.486 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:51.486 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 106300 00:25:54.021 00:25:54.021 job0: (groupid=0, jobs=1): err= 0: pid=106321: Mon Dec 16 11:47:54 2024 00:25:54.021 read: IOPS=10.9k, BW=42.4MiB/s (44.5MB/s)(255MiB/6006msec) 00:25:54.021 slat (usec): min=2, max=6051, avg=53.38, stdev=261.30 00:25:54.021 clat (usec): min=1759, max=15857, avg=7909.18, stdev=1268.11 00:25:54.021 lat (usec): min=1779, max=15868, avg=7962.56, stdev=1282.79 00:25:54.021 clat percentiles (usec): 00:25:54.021 | 1.00th=[ 4948], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7111], 00:25:54.021 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8029], 00:25:54.021 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[10159], 00:25:54.021 | 99.00th=[11863], 99.50th=[12387], 99.90th=[13566], 99.95th=[14746], 00:25:54.021 | 99.99th=[15270] 00:25:54.021 bw ( KiB/s): min=11657, max=27384, per=51.80%, avg=22508.73, stdev=5085.34, samples=11 00:25:54.021 iops : min= 2914, max= 6846, avg=5627.09, stdev=1271.33, samples=11 00:25:54.021 write: IOPS=6382, BW=24.9MiB/s (26.1MB/s)(133MiB/5331msec); 0 zone resets 00:25:54.021 slat (usec): min=11, max=3198, avg=64.07, stdev=153.37 00:25:54.021 clat (usec): min=311, max=17958, avg=7223.11, stdev=931.52 00:25:54.021 lat (usec): min=1043, max=17983, avg=7287.17, stdev=935.22 00:25:54.021 clat percentiles (usec): 00:25:54.021 | 1.00th=[ 4146], 5.00th=[ 5538], 10.00th=[ 6325], 20.00th=[ 6718], 00:25:54.021 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:25:54.021 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8291], 00:25:54.021 | 99.00th=[10159], 99.50th=[11076], 99.90th=[12256], 99.95th=[12780], 00:25:54.021 | 99.99th=[14484] 00:25:54.021 bw ( KiB/s): min=12239, max=27097, per=88.26%, avg=22534.55, stdev=4805.76, samples=11 00:25:54.021 iops : min= 3059, max= 6774, avg=5633.55, stdev=1201.58, samples=11 00:25:54.021 lat (usec) : 500=0.01% 00:25:54.021 lat (msec) : 2=0.01%, 4=0.38%, 10=95.19%, 20=4.42% 00:25:54.021 cpu : usr=5.33%, sys=22.16%, ctx=7538, majf=0, minf=102 00:25:54.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:25:54.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:54.021 issued rwts: total=65238,34027,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.021 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:54.021 00:25:54.021 Run status group 0 (all jobs): 00:25:54.021 READ: bw=42.4MiB/s (44.5MB/s), 42.4MiB/s-42.4MiB/s (44.5MB/s-44.5MB/s), io=255MiB (267MB), run=6006-6006msec 00:25:54.021 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=133MiB (139MB), run=5331-5331msec 00:25:54.021 00:25:54.021 Disk stats (read/write): 00:25:54.021 nvme0n1: ios=64455/33280, merge=0/0, ticks=477390/229745, in_queue=707135, util=98.67% 00:25:54.021 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:25:54.021 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath 
-- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:25:54.280 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:55.218 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:55.218 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:25:55.218 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:55.218 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:25:55.218 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=106449 00:25:55.218 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:25:55.218 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:25:55.218 [global] 00:25:55.218 thread=1 00:25:55.218 invalidate=1 00:25:55.218 rw=randrw 00:25:55.218 time_based=1 00:25:55.218 runtime=6 00:25:55.218 ioengine=libaio 00:25:55.218 direct=1 00:25:55.218 bs=4096 00:25:55.218 iodepth=128 00:25:55.218 norandommap=0 00:25:55.218 numjobs=1 00:25:55.218 00:25:55.218 verify_dump=1 00:25:55.218 verify_backlog=512 00:25:55.218 verify_state_save=0 00:25:55.218 do_verify=1 00:25:55.218 verify=crc32c-intel 00:25:55.218 [job0] 00:25:55.218 filename=/dev/nvme0n1 00:25:55.218 Could not set queue depth (nvme0n1) 00:25:55.218 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:55.218 fio-3.35 00:25:55.218 Starting 1 thread 00:25:56.153 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:56.412 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:56.980 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:57.915 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:57.915 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:57.915 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:57.915 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:58.174 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:58.432 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:59.420 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:59.420 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:59.420 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:59.420 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 106449 00:26:01.952 00:26:01.953 job0: (groupid=0, jobs=1): err= 0: pid=106470: Mon Dec 16 11:48:02 2024 00:26:01.953 read: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(270MiB/6006msec) 00:26:01.953 slat (usec): min=3, max=8369, avg=43.63, stdev=229.76 00:26:01.953 clat (usec): min=266, max=18975, avg=7501.87, stdev=2056.45 00:26:01.953 lat (usec): min=317, max=18988, avg=7545.50, stdev=2067.14 00:26:01.953 clat percentiles (usec): 00:26:01.953 | 1.00th=[ 1303], 5.00th=[ 3916], 10.00th=[ 5145], 20.00th=[ 6259], 00:26:01.953 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7898], 00:26:01.953 | 70.00th=[ 8225], 80.00th=[ 8717], 90.00th=[ 9765], 95.00th=[10814], 00:26:01.953 | 99.00th=[12780], 99.50th=[14091], 99.90th=[16188], 99.95th=[17171], 00:26:01.953 | 99.99th=[18220] 00:26:01.953 bw ( KiB/s): min=11872, max=36176, per=53.23%, avg=24495.33, stdev=5856.78, samples=12 00:26:01.953 iops : min= 2968, max= 9044, avg=6123.83, stdev=1464.19, samples=12 00:26:01.953 write: IOPS=6669, BW=26.1MiB/s (27.3MB/s)(144MiB/5510msec); 0 zone resets 00:26:01.953 slat (usec): min=4, max=3258, avg=53.42, stdev=132.49 00:26:01.953 clat (usec): min=262, max=16621, avg=6645.23, stdev=1809.33 00:26:01.953 lat (usec): min=310, max=16677, avg=6698.65, stdev=1816.38 00:26:01.953 clat percentiles (usec): 00:26:01.953 | 1.00th=[ 1020], 5.00th=[ 3359], 10.00th=[ 4228], 20.00th=[ 5473], 00:26:01.953 | 30.00th=[ 6390], 40.00th=[ 6718], 50.00th=[ 6980], 60.00th=[ 7177], 00:26:01.953 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8225], 95.00th=[ 9110], 00:26:01.953 | 99.00th=[11207], 99.50th=[11863], 99.90th=[14353], 99.95th=[14746], 00:26:01.953 | 99.99th=[16057] 00:26:01.953 bw ( KiB/s): min=12368, 
max=35672, per=91.69%, avg=24462.67, stdev=5577.00, samples=12 00:26:01.953 iops : min= 3092, max= 8918, avg=6115.67, stdev=1394.25, samples=12 00:26:01.953 lat (usec) : 500=0.05%, 750=0.21%, 1000=0.39% 00:26:01.953 lat (msec) : 2=1.91%, 4=3.59%, 10=87.12%, 20=6.73% 00:26:01.953 cpu : usr=5.76%, sys=22.65%, ctx=8530, majf=0, minf=90 00:26:01.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:01.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:01.953 issued rwts: total=69091,36749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:01.953 00:26:01.953 Run status group 0 (all jobs): 00:26:01.953 READ: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=270MiB (283MB), run=6006-6006msec 00:26:01.953 WRITE: bw=26.1MiB/s (27.3MB/s), 26.1MiB/s-26.1MiB/s (27.3MB/s-27.3MB/s), io=144MiB (151MB), run=5510-5510msec 00:26:01.953 00:26:01.953 Disk stats (read/write): 00:26:01.953 nvme0n1: ios=68417/35860, merge=0/0, ticks=483688/227302, in_queue=710990, util=98.65% 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:01.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:26:01.953 11:48:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:01.953 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:01.953 rmmod nvme_tcp 00:26:01.953 rmmod nvme_fabrics 00:26:02.213 rmmod nvme_keyring 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 106159 ']' 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 106159 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 106159 ']' 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 106159 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106159 00:26:02.213 killing process with pid 106159 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106159' 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 106159 00:26:02.213 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 106159 00:26:02.472 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:02.472 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:02.472 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:02.472 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.472 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:26:02.732 00:26:02.732 real 0m21.267s 00:26:02.732 user 1m12.756s 00:26:02.732 sys 0m7.884s 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:02.732 ************************************ 00:26:02.732 END TEST nvmf_target_multipath 00:26:02.732 ************************************ 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:02.732 ************************************ 00:26:02.732 START TEST nvmf_zcopy 00:26:02.732 ************************************ 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:26:02.732 * Looking for test storage... 00:26:02.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.732 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:02.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.732 --rc genhtml_branch_coverage=1 00:26:02.732 --rc genhtml_function_coverage=1 00:26:02.732 --rc genhtml_legend=1 00:26:02.733 --rc geninfo_all_blocks=1 00:26:02.733 --rc geninfo_unexecuted_blocks=1 00:26:02.733 00:26:02.733 ' 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:02.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.733 --rc genhtml_branch_coverage=1 00:26:02.733 --rc genhtml_function_coverage=1 00:26:02.733 --rc genhtml_legend=1 00:26:02.733 --rc geninfo_all_blocks=1 00:26:02.733 --rc geninfo_unexecuted_blocks=1 00:26:02.733 00:26:02.733 ' 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:02.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.733 --rc genhtml_branch_coverage=1 00:26:02.733 --rc genhtml_function_coverage=1 00:26:02.733 --rc genhtml_legend=1 00:26:02.733 --rc geninfo_all_blocks=1 00:26:02.733 --rc geninfo_unexecuted_blocks=1 00:26:02.733 00:26:02.733 ' 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:02.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.733 --rc genhtml_branch_coverage=1 00:26:02.733 --rc genhtml_function_coverage=1 00:26:02.733 --rc genhtml_legend=1 00:26:02.733 --rc geninfo_all_blocks=1 00:26:02.733 --rc geninfo_unexecuted_blocks=1 00:26:02.733 00:26:02.733 ' 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.733 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:26:02.992 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.992 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.993 11:48:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:02.993 11:48:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:02.993 Cannot find device "nvmf_init_br" 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:02.993 Cannot find device "nvmf_init_br2" 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:02.993 Cannot find device "nvmf_tgt_br" 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:02.993 Cannot find device "nvmf_tgt_br2" 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:02.993 Cannot find device "nvmf_init_br" 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:02.993 Cannot find device "nvmf_init_br2" 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:02.993 Cannot find device "nvmf_tgt_br" 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:02.993 Cannot find device "nvmf_tgt_br2" 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:02.993 Cannot find device 
"nvmf_br" 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:02.993 Cannot find device "nvmf_init_if" 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:02.993 Cannot find device "nvmf_init_if2" 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:02.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:02.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:02.993 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:03.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:03.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:26:03.253 00:26:03.253 --- 10.0.0.3 ping statistics --- 00:26:03.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.253 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:03.253 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:26:03.253 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:26:03.253 00:26:03.253 --- 10.0.0.4 ping statistics --- 00:26:03.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.253 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:03.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:03.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:26:03.253 00:26:03.253 --- 10.0.0.1 ping statistics --- 00:26:03.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.253 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:03.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:26:03.253 00:26:03.253 --- 10.0.0.2 ping statistics --- 00:26:03.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.253 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=106822 00:26:03.253 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:26:03.254 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 106822 00:26:03.254 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 106822 ']' 00:26:03.254 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:26:03.254 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.254 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.254 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.254 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:03.254 [2024-12-16 11:48:03.982270] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:03.254 [2024-12-16 11:48:03.983521] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:26:03.254 [2024-12-16 11:48:03.983592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.513 [2024-12-16 11:48:04.136288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.513 [2024-12-16 11:48:04.198601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.513 [2024-12-16 11:48:04.198667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.513 [2024-12-16 11:48:04.198681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.513 [2024-12-16 11:48:04.198692] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.513 [2024-12-16 11:48:04.198701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:03.513 [2024-12-16 11:48:04.199157] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.772 [2024-12-16 11:48:04.296392] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:03.772 [2024-12-16 11:48:04.296750] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
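The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is the harness polling the freshly launched nvmf_tgt until its JSON-RPC socket answers. A minimal sketch of such a readiness loop — a simplified stand-in for the real waitforlisten helper in autotest_common.sh, assuming only scripts/rpc.py and the default /var/tmp/spdk.sock used in this run:

    # Poll a just-launched SPDK app until its JSON-RPC socket responds.
    # Simplified stand-in for autotest_common.sh's waitforlisten;
    # rpc_get_methods is a cheap call that succeeds once the app listens.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # app exited early
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                       # timed out
    }

Once the socket answers, the harness proceeds to the rpc_cmd bring-up traced below.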
00:26:03.772 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.772 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:26:03.772 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.772 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.772 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:03.772 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.772 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:03.773 [2024-12-16 11:48:04.376126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:03.773 [2024-12-16 11:48:04.396010] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:26:03.773 11:48:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:03.773 malloc0 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:03.773 { 00:26:03.773 "params": { 00:26:03.773 "name": "Nvme$subsystem", 00:26:03.773 "trtype": "$TEST_TRANSPORT", 00:26:03.773 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:03.773 "adrfam": "ipv4", 00:26:03.773 "trsvcid": "$NVMF_PORT", 00:26:03.773 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:03.773 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:03.773 "hdgst": ${hdgst:-false}, 00:26:03.773 "ddgst": ${ddgst:-false} 00:26:03.773 }, 00:26:03.773 "method": "bdev_nvme_attach_controller" 00:26:03.773 } 00:26:03.773 EOF 00:26:03.773 )") 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:26:03.773 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:03.773 "params": { 00:26:03.773 "name": "Nvme1", 00:26:03.773 "trtype": "tcp", 00:26:03.773 "traddr": "10.0.0.3", 00:26:03.773 "adrfam": "ipv4", 00:26:03.773 "trsvcid": "4420", 00:26:03.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.773 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:03.773 "hdgst": false, 00:26:03.773 "ddgst": false 00:26:03.773 }, 00:26:03.773 "method": "bdev_nvme_attach_controller" 00:26:03.773 }' 00:26:03.773 [2024-12-16 11:48:04.513687] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
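Everything bdevperf needs has now been staged through rpc_cmd: a zero-copy TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.3:4420, and a 32 MB malloc bdev exported as namespace 1. As a minimal sketch, the same bring-up can be replayed against a standalone target with scripts/rpc.py — flags copied verbatim from the trace above, assuming the target listens on the default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with the zero-copy option, flags exactly as zcopy.sh issues them
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem: allow any host (-a), fixed serial (-s), at most 10 namespaces (-m)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Also advertise through the discovery subsystem, as the test does
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # 32 MB RAM-backed bdev with 4096-byte blocks, attached as namespace 1
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The JSON printed by gen_nvmf_target_json is then handed to bdevperf on an anonymous fd (--json /dev/fd/62), directing it to attach Nvme1 over TCP to 10.0.0.3:4420 and drive the 10-second queue-depth-128 verify workload whose per-second IOPS samples follow.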
00:26:03.773 [2024-12-16 11:48:04.513831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106854 ] 00:26:04.032 [2024-12-16 11:48:04.672462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.032 [2024-12-16 11:48:04.740939] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.291 Running I/O for 10 seconds... 00:26:06.605 5550.00 IOPS, 43.36 MiB/s [2024-12-16T11:48:08.296Z] 5611.50 IOPS, 43.84 MiB/s [2024-12-16T11:48:09.252Z] 5633.33 IOPS, 44.01 MiB/s [2024-12-16T11:48:10.192Z] 5669.75 IOPS, 44.29 MiB/s [2024-12-16T11:48:11.129Z] 5709.60 IOPS, 44.61 MiB/s [2024-12-16T11:48:12.066Z] 5725.00 IOPS, 44.73 MiB/s [2024-12-16T11:48:13.003Z] 5745.29 IOPS, 44.89 MiB/s [2024-12-16T11:48:13.940Z] 5751.12 IOPS, 44.93 MiB/s [2024-12-16T11:48:15.315Z] 5744.11 IOPS, 44.88 MiB/s [2024-12-16T11:48:15.315Z] 5718.60 IOPS, 44.68 MiB/s 00:26:14.562 Latency(us) 00:26:14.562 [2024-12-16T11:48:15.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.562 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:26:14.562 Verification LBA range: start 0x0 length 0x1000 00:26:14.562 Nvme1n1 : 10.02 5722.23 44.70 0.00 0.00 22295.54 2889.54 37176.79 00:26:14.562 [2024-12-16T11:48:15.315Z] =================================================================================================================== 00:26:14.562 [2024-12-16T11:48:15.315Z] Total : 5722.23 44.70 0.00 0.00 22295.54 2889.54 37176.79 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=106971 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:14.562 { 00:26:14.562 "params": { 00:26:14.562 "name": "Nvme$subsystem", 00:26:14.562 "trtype": "$TEST_TRANSPORT", 00:26:14.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.562 "adrfam": "ipv4", 00:26:14.562 "trsvcid": "$NVMF_PORT", 00:26:14.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.562 "hdgst": ${hdgst:-false}, 00:26:14.562 "ddgst": ${ddgst:-false} 00:26:14.562 }, 00:26:14.562 "method": "bdev_nvme_attach_controller" 00:26:14.562 } 00:26:14.562 EOF 00:26:14.562 )") 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:26:14.562 [2024-12-16 
11:48:15.163824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.562 [2024-12-16 11:48:15.163887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:26:14.562 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:14.562 "params": { 00:26:14.562 "name": "Nvme1", 00:26:14.562 "trtype": "tcp", 00:26:14.562 "traddr": "10.0.0.3", 00:26:14.562 "adrfam": "ipv4", 00:26:14.562 "trsvcid": "4420", 00:26:14.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:14.562 "hdgst": false, 00:26:14.562 "ddgst": false 00:26:14.562 }, 00:26:14.562 "method": "bdev_nvme_attach_controller" 00:26:14.562 }' 00:26:14.562 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.562 [2024-12-16 11:48:15.175728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.562 [2024-12-16 11:48:15.175756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.562 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.562 [2024-12-16 11:48:15.187723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.562 [2024-12-16 11:48:15.187748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.562 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.562 [2024-12-16 11:48:15.199713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.562 [2024-12-16 11:48:15.199741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.562 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.562 [2024-12-16 11:48:15.211672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.562 [2024-12-16 11:48:15.211698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.562 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:26:14.562 [2024-12-16 11:48:15.223089] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:26:14.562 [2024-12-16 11:48:15.223217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106971 ] 00:26:14.562 [2024-12-16 11:48:15.223698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.562 [2024-12-16 11:48:15.223721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.562 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.562 [2024-12-16 11:48:15.235721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.562 [2024-12-16 11:48:15.235745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.562 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.562 [2024-12-16 11:48:15.247683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.562 [2024-12-16 11:48:15.247705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.562 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.562 [2024-12-16 11:48:15.259689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.562 [2024-12-16 11:48:15.259715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.562 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.562 [2024-12-16 11:48:15.271730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.562 [2024-12-16 11:48:15.271754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.563 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.563 [2024-12-16 11:48:15.283720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.563 [2024-12-16 11:48:15.283756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:26:14.563 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.563 [2024-12-16 11:48:15.295696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.563 [2024-12-16 11:48:15.295723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.563 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.563 [2024-12-16 11:48:15.307686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.563 [2024-12-16 11:48:15.307709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.563 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.821 [2024-12-16 11:48:15.319771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.821 [2024-12-16 11:48:15.319798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.821 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.821 [2024-12-16 11:48:15.331721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.821 [2024-12-16 11:48:15.331748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.821 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.821 [2024-12-16 11:48:15.343700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.821 [2024-12-16 11:48:15.343730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.821 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:14.821 [2024-12-16 11:48:15.355730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:14.821 [2024-12-16 11:48:15.355758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:14.821 2024/12/16 11:48:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, 
00:26:14.822 [... same error sequence repeated 12 more times at ~12 ms intervals, 11:48:15.367737 - 11:48:15.499720, interleaved with the two notices below ...]
00:26:14.822 [2024-12-16 11:48:15.377878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:14.822 [2024-12-16 11:48:15.436866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:26:15.081 [... same error sequence repeated 9 more times at ~12 ms intervals, 11:48:15.511732 - 11:48:15.607786 ...]
00:26:15.081 Running I/O for 5 seconds...
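Note: "Running I/O for 5 seconds..." is bdevperf's run banner; the job's exact command line is not part of this excerpt. A representative invocation consistent with the surrounding output (one reactor on core 0, a 5-second run, and the 8 KiB I/O size implied by the stats line further down) might look like the sketch below; every flag value and the config path are assumptions.

    # Representative only, not the job's actual command line.
    #   -m 0x1  one core -> "Total cores available: 1" / "Reactor started on core 0"
    #   -t 5    -> "Running I/O for 5 seconds..."
    #   -o 8192 -> 8 KiB I/O, consistent with 11303 IOPS at 88.30 MiB/s below
    ./build/examples/bdevperf -m 0x1 -q 128 -o 8192 -w verify -t 5 \
        --json bdevperf_config.json  # hypothetical bdev config for the target under test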
00:26:16.120 [... same error sequence repeated at irregular ~10-25 ms intervals throughout the run, 11:48:15.619857 - 11:48:16.609598, still with identical parameters ...]
00:26:16.120 11303.00 IOPS, 88.30 MiB/s [2024-12-16T11:48:16.873Z]
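Note: the stats line is internally consistent with an 8 KiB I/O size: 11303 IOPS x 8192 B ~= 92.6 MB/s ~= 88.3 MiB/s.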
00:26:16.380 [... same error sequence continued, 11:48:16.624861 - 11:48:17.057971 ...]
00:26:16.380 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.380 [2024-12-16 11:48:17.073487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.380 [2024-12-16 11:48:17.073520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.380 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.380 [2024-12-16 11:48:17.090912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.380 [2024-12-16 11:48:17.090946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.381 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.381 [2024-12-16 11:48:17.112632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.381 [2024-12-16 11:48:17.112665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.381 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.381 [2024-12-16 11:48:17.127419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.381 [2024-12-16 11:48:17.127454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.381 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.137979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.138013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.152730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.152763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.171632] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.171682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.183032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.183069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.193630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.193677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.207793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.207829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.217778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.217812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.233609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.233642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.249925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 
11:48:17.249958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.266874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.266911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.277451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.277501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.292952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.292988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.306673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.306708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.316648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.316683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.333197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.333231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.351611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.351645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.640 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.640 [2024-12-16 11:48:17.361302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.640 [2024-12-16 11:48:17.361334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.641 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.641 [2024-12-16 11:48:17.377831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.641 [2024-12-16 11:48:17.377878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.641 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.899 [2024-12-16 11:48:17.394123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.899 [2024-12-16 11:48:17.394187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.899 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.899 [2024-12-16 11:48:17.409234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.899 [2024-12-16 11:48:17.409267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 [2024-12-16 11:48:17.428520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.428553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 [2024-12-16 11:48:17.449384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.449436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 [2024-12-16 11:48:17.465408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.465453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 [2024-12-16 11:48:17.483415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.483452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 [2024-12-16 11:48:17.503821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.503854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 [2024-12-16 11:48:17.514046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.514096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 [2024-12-16 11:48:17.529310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.529343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 [2024-12-16 11:48:17.546778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.546812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 [2024-12-16 11:48:17.561350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.561381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 [2024-12-16 11:48:17.579749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.579782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 [2024-12-16 11:48:17.589341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.589373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 [2024-12-16 11:48:17.604578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.604611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:16.900 11347.50 IOPS, 88.65 MiB/s [2024-12-16T11:48:17.653Z] [2024-12-16 11:48:17.624633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:16.900 [2024-12-16 11:48:17.624665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.900 2024/12/16 11:48:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
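Note: the repeated failure above is the expected negative path for duplicate NSIDs: once a namespace owns NSID 1 on nqn.2016-06.io.spdk:cnode1, each further nvmf_subsystem_add_ns call requesting nsid 1 is rejected with JSON-RPC error -32602. A minimal sketch of the same sequence against a running target, assuming scripts/rpc.py from the SPDK tree and a pre-created malloc0 bdev (both assumptions, not taken from this log):
  # first add claims NSID 1 on the subsystem and succeeds
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
  # second add requests the same NSID and fails: "Requested NSID 1 already in use"
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0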
[the same duplicate-NSID error sequence continues for every retry from 11:48:17.641 through 11:48:18.744; only the timestamps differ]
11309.00 IOPS, 88.35 MiB/s [2024-12-16T11:48:18.699Z]
[2024-12-16 11:48:18.764685] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.206 [2024-12-16 11:48:18.764729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.206 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.206 [2024-12-16 11:48:18.780486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.206 [2024-12-16 11:48:18.780518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.206 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.206 [2024-12-16 11:48:18.801141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.206 [2024-12-16 11:48:18.801186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.206 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.206 [2024-12-16 11:48:18.818015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.206 [2024-12-16 11:48:18.818046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.206 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.206 [2024-12-16 11:48:18.835546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.206 [2024-12-16 11:48:18.835580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.206 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.206 [2024-12-16 11:48:18.845768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.206 [2024-12-16 11:48:18.845798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.206 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.206 [2024-12-16 11:48:18.860230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.206 [2024-12-16 
11:48:18.860262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.206 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.206 [2024-12-16 11:48:18.879932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.206 [2024-12-16 11:48:18.879965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.206 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.206 [2024-12-16 11:48:18.890456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.206 [2024-12-16 11:48:18.890487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.206 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.206 [2024-12-16 11:48:18.906406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.206 [2024-12-16 11:48:18.906452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.206 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.206 [2024-12-16 11:48:18.921437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.206 [2024-12-16 11:48:18.921470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.206 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.206 [2024-12-16 11:48:18.940080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.206 [2024-12-16 11:48:18.940115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.206 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.465 [2024-12-16 11:48:18.961833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.465 [2024-12-16 11:48:18.961868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.465 2024/12/16 11:48:18 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.465 [2024-12-16 11:48:18.975544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.465 [2024-12-16 11:48:18.975603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.466 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.466 [2024-12-16 11:48:18.986985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.466 [2024-12-16 11:48:18.987018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.466 2024/12/16 11:48:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.466 [2024-12-16 11:48:19.000498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.466 [2024-12-16 11:48:19.000545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.466 2024/12/16 11:48:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.466 [2024-12-16 11:48:19.016818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.466 [2024-12-16 11:48:19.016860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.466 2024/12/16 11:48:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.466 [2024-12-16 11:48:19.035935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.466 [2024-12-16 11:48:19.035965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.466 2024/12/16 11:48:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:18.466 [2024-12-16 11:48:19.045160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:18.466 [2024-12-16 11:48:19.045204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:18.466 2024/12/16 11:48:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
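[Note: the failing call captured above is SPDK's nvmf_subsystem_add_ns JSON-RPC method. A minimal reproduction sketch against a running target, assuming the nqn.2016-06.io.spdk:cnode1 subsystem and the malloc0 bdev from this test already exist (flag spelling per the stock scripts/rpc.py client; verify with `scripts/rpc.py nvmf_subsystem_add_ns -h` on your build):

  # first call claims NSID 1 on the subsystem and succeeds
  ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
  # repeating the identical call is rejected, producing exactly the pair of
  # target-side errors seen here ("Requested NSID 1 already in use" /
  # "Unable to add namespace") plus a -32602 reply to the client
  ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
]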
[... identical failures continue, timestamps 11:48:19.061329 through 11:48:19.768520; a second throughput checkpoint lands mid-stream at 11:48:19.738: ...]
00:26:18.985 11291.25 IOPS, 88.21 MiB/s [2024-12-16T11:48:19.738Z]
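[Note: on the wire each attempt is an ordinary JSON-RPC 2.0 exchange; reconstructed from the logged params map (the id value and field order are illustrative, not taken from the capture):

  request:  {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
             "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                        "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
  response: {"jsonrpc": "2.0", "id": 1,
             "error": {"code": -32602, "message": "Invalid parameters"}}

The Go test client formats boolean params with %s, which is why the logged map shows "%!s(bool=false)" for hide_metadata and no_auto_visible.]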
[... the run continues unchanged, timestamps 11:48:19.788957 through 11:48:20.179901 ...]
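[Note: to confirm on a live target why -32602 keeps coming back, listing the subsystems shows NSID 1 already attached to cnode1; a sketch using the stock client, with the output shape abbreviated:

  ./scripts/rpc.py nvmf_get_subsystems
  # -> ... {"nqn": "nqn.2016-06.io.spdk:cnode1", ...,
  #         "namespaces": [{"nsid": 1, "bdev_name": "malloc0", ...}]} ...
]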
[... failures continue, timestamps 11:48:20.190868 through 11:48:20.462098, up to the final logged attempt: ...]
00:26:19.765 [2024-12-16 11:48:20.478144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:19.765 [2024-12-16 11:48:20.478196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:26:19.765 2024/12/16 11:48:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:19.765 [2024-12-16 11:48:20.493097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:19.765 [2024-12-16 11:48:20.493127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:19.766 2024/12/16 11:48:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:19.766 [2024-12-16 11:48:20.512206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:19.766 [2024-12-16 11:48:20.512238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:19.766 2024/12/16 11:48:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:20.024 [2024-12-16 11:48:20.529692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:20.025 [2024-12-16 11:48:20.529726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:20.025 2024/12/16 11:48:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:20.025 [2024-12-16 11:48:20.548174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:20.025 [2024-12-16 11:48:20.548208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:20.025 2024/12/16 11:48:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:20.025 [2024-12-16 11:48:20.568624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:20.025 [2024-12-16 11:48:20.568655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:20.025 2024/12/16 11:48:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:20.025 [2024-12-16 11:48:20.586012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:20.025 [2024-12-16 11:48:20.586045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:20.025 2024/12/16 11:48:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
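For reference, every failing attempt above is the same RPC issued while NSID 1 is still attached. A minimal sketch of provoking it by hand with SPDK's scripts/rpc.py (default RPC socket; bdev name and the 64 MiB / 512 B sizes are this suite's usual malloc0 settings, not taken from this exact log line) might be:

    # create the backing bdev and attach it as NSID 1 once
    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # a second add with the same NSID is what the loop above exercises;
    # it is rejected with Code=-32602 / "Requested NSID 1 already in use"
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1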
[... failures continue at 11:48:20.599 and 11:48:20.609 ...]
00:26:20.025 11297.80 IOPS, 88.26 MiB/s [2024-12-16T11:48:20.778Z]
[... one more failure at 11:48:20.624 ...]
00:26:20.025
00:26:20.025 Latency(us)
00:26:20.025 [2024-12-16T11:48:20.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.025 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:26:20.025 Nvme1n1 : 5.01 11298.53 88.27 0.00 0.00 11314.10 3068.28 22520.55
00:26:20.025 [2024-12-16T11:48:20.778Z] ===================================================================================================================
00:26:20.025 [2024-12-16T11:48:20.778Z] Total : 11298.53 88.27 0.00 0.00 11314.10 3068.28 22520.55
[... the same three-message failure then repeats at roughly 12 ms intervals, timestamps 11:48:20.635 through 11:48:20.827 ...]
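The interleaved IOPS line and the summary table come from the I/O job running against the subsystem while the RPC loop fails in the background. A rough stand-in for that job, matching the queue depth, I/O size, mix and runtime shown in the table (flag spelling assumed from SPDK's perf example, not recorded in this log), might be:

    # 5 s of 50/50 randrw, queue depth 128, 8 KiB I/Os over NVMe/TCP
    /home/vagrant/spdk_repo/spdk/build/examples/perf -t 5 -q 128 -o 8192 -w randrw -M 50 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'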
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (106971) - No such process 00:26:20.284 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 106971 00:26:20.284 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:20.284 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.284 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:20.284 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.284 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:20.284 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.285 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:20.285 delay0 00:26:20.285 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.285 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:26:20.285 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.285 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:20.285 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
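The delay0 sequence just traced boils down to two RPCs, with the flags taken verbatim from the trace above: wrap malloc0 in a delay bdev with all four latency knobs at 1,000,000 us, then expose it as NSID 1. Issued directly, a sketch would be:

    # -r/-t/-w/-n: average read, p99 read, average write, p99 write latency (us)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1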
00:26:20.285 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:26:20.285 [2024-12-16 11:48:21.029570] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:28.405 Initializing NVMe Controllers 00:26:28.405 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:28.405 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:28.405 Initialization complete. Launching workers. 00:26:28.405 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 235, failed: 20736 00:26:28.405 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20851, failed to submit 120 00:26:28.405 success 20765, unsuccessful 86, failed 0 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:28.405 rmmod nvme_tcp 00:26:28.405 rmmod nvme_fabrics 00:26:28.405 rmmod nvme_keyring 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 106822 ']' 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 106822 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 106822 ']' 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 106822 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106822 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:26:28.405 killing process with pid 106822 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106822' 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 106822 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 106822 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:26:28.405 00:26:28.405 real 0m25.430s 00:26:28.405 user 0m37.870s 00:26:28.405 sys 0m9.421s 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.405 ************************************ 00:26:28.405 END TEST nvmf_zcopy 00:26:28.405 ************************************ 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:28.405 ************************************ 00:26:28.405 START TEST nvmf_nmic 00:26:28.405 ************************************ 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:26:28.405 * Looking for test storage... 
00:26:28.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:28.405 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:28.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.406 --rc genhtml_branch_coverage=1 00:26:28.406 --rc genhtml_function_coverage=1 00:26:28.406 --rc genhtml_legend=1 00:26:28.406 --rc geninfo_all_blocks=1 00:26:28.406 --rc geninfo_unexecuted_blocks=1 00:26:28.406 00:26:28.406 ' 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:28.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.406 --rc genhtml_branch_coverage=1 00:26:28.406 --rc genhtml_function_coverage=1 00:26:28.406 --rc genhtml_legend=1 00:26:28.406 --rc geninfo_all_blocks=1 00:26:28.406 --rc geninfo_unexecuted_blocks=1 00:26:28.406 00:26:28.406 ' 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:28.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.406 --rc genhtml_branch_coverage=1 00:26:28.406 --rc genhtml_function_coverage=1 00:26:28.406 --rc genhtml_legend=1 00:26:28.406 --rc geninfo_all_blocks=1 00:26:28.406 --rc geninfo_unexecuted_blocks=1 00:26:28.406 00:26:28.406 ' 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:28.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.406 --rc genhtml_branch_coverage=1 00:26:28.406 --rc genhtml_function_coverage=1 00:26:28.406 --rc genhtml_legend=1 00:26:28.406 --rc geninfo_all_blocks=1 00:26:28.406 --rc geninfo_unexecuted_blocks=1 00:26:28.406 00:26:28.406 ' 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same three toolchain directories, already stacked several times over by earlier sourcings...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...same stacked value...] 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...same stacked value...] 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...same stacked value...] 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.406 11:48:28
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:28.406 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:28.407 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:28.407 Cannot find device "nvmf_init_br" 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:28.407 Cannot find device "nvmf_init_br2" 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:28.407 Cannot find device "nvmf_tgt_br" 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:28.407 Cannot find device "nvmf_tgt_br2" 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:28.407 Cannot find device "nvmf_init_br" 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:28.407 Cannot find device "nvmf_init_br2" 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:28.407 Cannot find device "nvmf_tgt_br" 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:28.407 Cannot find device "nvmf_tgt_br2" 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
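All of the "Cannot find device" lines here are expected: before building its veth topology, the helper tears down anything a previous run may have left behind, letting each command fail harmlessly. The pre-clean idiom is roughly:

    # best-effort teardown; every command may fail when the device is absent
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down || true
    done
    ip link delete nvmf_br type bridge || true

The interface and namespace deletions that follow in the trace rely on the same "|| true" tolerance.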
00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:28.407 Cannot find device "nvmf_br" 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:28.407 Cannot find device "nvmf_init_if" 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:28.407 Cannot find device "nvmf_init_if2" 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:28.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:28.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:28.407 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:28.666 11:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:28.666 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:28.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:28.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:26:28.667 00:26:28.667 --- 10.0.0.3 ping statistics --- 00:26:28.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.667 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:28.667 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:28.667 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:26:28.667 00:26:28.667 --- 10.0.0.4 ping statistics --- 00:26:28.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.667 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:28.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:28.667 00:26:28.667 --- 10.0.0.1 ping statistics --- 00:26:28.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.667 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:28.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:26:28.667 00:26:28.667 --- 10.0.0.2 ping statistics --- 00:26:28.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.667 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=107346 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 107346 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 107346 ']' 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.667 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:28.926 [2024-12-16 11:48:29.468090] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:28.926 [2024-12-16 11:48:29.469361] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:26:28.926 [2024-12-16 11:48:29.469425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.926 [2024-12-16 11:48:29.621078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.185 [2024-12-16 11:48:29.685118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.185 [2024-12-16 11:48:29.685571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.185 [2024-12-16 11:48:29.685810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.185 [2024-12-16 11:48:29.686097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.185 [2024-12-16 11:48:29.686311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.185 [2024-12-16 11:48:29.687668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.185 [2024-12-16 11:48:29.687741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.185 [2024-12-16 11:48:29.687834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.185 [2024-12-16 11:48:29.687880] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.185 [2024-12-16 11:48:29.786535] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:29.185 [2024-12-16 11:48:29.786641] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:29.185 [2024-12-16 11:48:29.786805] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
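nvmfappstart above boils down to launching nvmf_tgt inside the namespace and blocking until its RPC socket answers. A hedged sketch: the polling loop is a simplification of waitforlisten, which in the suite also retries up to a bounded count and validates the PID; rpc_get_methods is used here only as a cheap liveness probe.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    # Target is ready once the RPC server on /var/tmp/spdk.sock responds.
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.1
done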
00:26:29.185 [2024-12-16 11:48:29.787266] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:29.185 [2024-12-16 11:48:29.787585] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:29.185 [2024-12-16 11:48:29.869189] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:29.185 Malloc0 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.185 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:29.455 
11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:29.455 [2024-12-16 11:48:29.949181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.455 test case1: single bdev can't be used in multiple subsystems 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:29.455 [2024-12-16 11:48:29.972814] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:26:29.455 [2024-12-16 11:48:29.972979] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:26:29.455 [2024-12-16 11:48:29.973092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.455 2024/12/16 11:48:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.455 request: 00:26:29.455 { 00:26:29.455 "method": "nvmf_subsystem_add_ns", 00:26:29.455 "params": { 00:26:29.455 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:26:29.455 "namespace": { 00:26:29.455 "bdev_name": "Malloc0", 00:26:29.455 "no_auto_visible": false, 00:26:29.455 "hide_metadata": false 00:26:29.455 } 00:26:29.455 } 00:26:29.455 } 00:26:29.455 Got JSON-RPC error response 00:26:29.455 GoRPCClient: error on JSON-RPC call 00:26:29.455 
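Spelled out against rpc.py, the rpc_cmd sequence above, plus the nvmf_subsystem_add_ns call that is supposed to fail, looks like the following. (rpc_cmd in the suite multiplexes one long-lived rpc.py session; this one-shot form does not reproduce that, only the call order.)

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# test case1: a second subsystem may listen, but cannot claim the same bdev.
# Malloc0 is already held with an exclusive_write claim by cnode1, which is
# the bdev_open error / JSON-RPC -32602 traced above.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
nmic_status=0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=$?
[ "$nmic_status" -ne 0 ] && echo ' Adding namespace failed - expected result.'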
11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:26:29.455 Adding namespace failed - expected result. 00:26:29.455 test case2: host connect to nvmf target in multiple paths 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:29.455 [2024-12-16 11:48:29.988909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.455 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:26:29.455 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:26:29.455 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:26:29.455 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:26:29.455 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:29.455 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:29.455 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:26:31.988 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:31.988 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:31.988 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:31.988 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:31.988 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:31.988 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:26:31.988 11:48:32 
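test case2 then proves multipath access: the same host connects to cnode1 through both listeners and waits for one namespace to surface. A reconstruction of the connect-and-wait step (the loop is an assumed simplification of waitforserial in autotest_common.sh):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
nvme connect --hostnqn="$hostnqn" --hostid="${hostnqn#*uuid:}" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
nvme connect --hostnqn="$hostnqn" --hostid="${hostnqn#*uuid:}" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421

# Poll until a block device carrying the subsystem serial shows up.
i=0
while (( i++ <= 15 )); do
    sleep 2
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
    (( nvme_devices >= 1 )) && break
done

Two connects to the same NQN give the kernel two controllers over one subsystem; native NVMe multipath collapses them into the single nvme0n1 that fio writes to next, which is also why the later disconnect reports 2 controller(s).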
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:26:31.988 [global]
00:26:31.988 thread=1
00:26:31.988 invalidate=1
00:26:31.988 rw=write
00:26:31.988 time_based=1
00:26:31.988 runtime=1
00:26:31.988 ioengine=libaio
00:26:31.988 direct=1
00:26:31.988 bs=4096
00:26:31.988 iodepth=1
00:26:31.988 norandommap=0
00:26:31.988 numjobs=1
00:26:31.988
00:26:31.988 verify_dump=1
00:26:31.988 verify_backlog=512
00:26:31.988 verify_state_save=0
00:26:31.988 do_verify=1
00:26:31.988 verify=crc32c-intel
00:26:31.988 [job0]
00:26:31.988 filename=/dev/nvme0n1
00:26:31.988 Could not set queue depth (nvme0n1)
00:26:31.988 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:26:31.988 fio-3.35
00:26:31.988 Starting 1 thread
00:26:32.924
00:26:32.924 job0: (groupid=0, jobs=1): err= 0: pid=107437: Mon Dec 16 11:48:33 2024
00:26:32.924   read: IOPS=2829, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec)
00:26:32.924     slat (nsec): min=13921, max=56641, avg=16659.56, stdev=3911.88
00:26:32.924     clat (usec): min=150, max=242, avg=173.88, stdev=12.34
00:26:32.924      lat (usec): min=165, max=271, avg=190.54, stdev=13.12
00:26:32.924     clat percentiles (usec):
00:26:32.924      |  1.00th=[ 157],  5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165],
00:26:32.924      | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174],
00:26:32.924      | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 200],
00:26:32.924      | 99.00th=[ 212], 99.50th=[ 215], 99.90th=[ 219], 99.95th=[ 227],
00:26:32.924      | 99.99th=[ 243]
00:26:32.924   write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:26:32.924     slat (nsec): min=19270, max=87299, avg=24237.40, stdev=6397.25
00:26:32.924     clat (usec): min=105, max=343, avg=122.17, stdev=11.81
00:26:32.924      lat (usec): min=125, max=365, avg=146.41, stdev=14.38
00:26:32.924     clat percentiles (usec):
00:26:32.924      |  1.00th=[ 109],  5.00th=[ 112], 10.00th=[ 113], 20.00th=[ 115],
00:26:32.924      | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 119], 60.00th=[ 121],
00:26:32.924      | 70.00th=[ 125], 80.00th=[ 130], 90.00th=[ 137], 95.00th=[ 143],
00:26:32.924      | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 231], 99.95th=[ 253],
00:26:32.924      | 99.99th=[ 343]
00:26:32.924    bw (  KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1
00:26:32.924    iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:26:32.924   lat (usec)   : 250=99.97%, 500=0.03%
00:26:32.924   cpu          : usr=2.30%, sys=9.20%, ctx=5904, majf=0, minf=5
00:26:32.924   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:32.924      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:32.924      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:32.924      issued rwts: total=2832,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:32.924      latency   : target=0, window=0, percentile=100.00%, depth=1
00:26:32.924
00:26:32.924 Run status group 0 (all jobs):
00:26:32.924    READ: bw=11.1MiB/s (11.6MB/s), 11.1MiB/s-11.1MiB/s (11.6MB/s-11.6MB/s), io=11.1MiB (11.6MB), run=1001-1001msec
00:26:32.924   WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec
00:26:32.924
00:26:32.924 Disk stats (read/write):
00:26:32.924   nvme0n1: ios=2610/2746, merge=0/0, ticks=465/362, in_queue=827, util=91.38%
11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic --
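fio-wrapper renders its -p/-i/-d/-t/-r/-v flags into exactly the job file echoed above. Reproduced standalone (device node assumed unchanged from the run):

cat > /tmp/nmic-job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-job0.fio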
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:32.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:26:32.924 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:32.924 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:26:32.924 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:32.924 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:32.924 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:32.924 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:32.924 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:26:32.924 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:32.924 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:26:32.924 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:32.924 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:33.183 rmmod nvme_tcp 00:26:33.183 rmmod nvme_fabrics 00:26:33.183 rmmod nvme_keyring 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 107346 ']' 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 107346 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 107346 ']' 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 107346 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107346 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107346' 00:26:33.183 killing process with pid 107346 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 107346 00:26:33.183 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 107346 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:33.442 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:26:33.700 00:26:33.700 real 0m5.482s 00:26:33.700 user 0m14.777s 00:26:33.700 sys 0m2.341s 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:33.700 ************************************ 00:26:33.700 END TEST nvmf_nmic 00:26:33.700 ************************************ 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:33.700 ************************************ 00:26:33.700 START TEST nvmf_fio_target 00:26:33.700 ************************************ 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:26:33.700 * Looking for test storage... 
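nvmftestfini in the nvmf_nmic run above unwinds everything the setup created, in reverse. Consolidated into one sketch; the module list and the final netns delete are inferred from the rmmod/_remove_spdk_ns lines, and killprocess in the log targets pid 107346:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both controllers
sync
modprobe -v -r nvme-tcp nvme-fabrics            # nvme_keyring is unloaded with them
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null

# Strip only the SPDK_NVMF-tagged rules, leaving the rest of the ruleset alone.
iptables-save | grep -v SPDK_NVMF | iptables-restore

for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                # what _remove_spdk_ns amounts to (assumed)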
00:26:33.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:33.700 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:26:33.960 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:33.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.961 --rc genhtml_branch_coverage=1 00:26:33.961 --rc genhtml_function_coverage=1 00:26:33.961 --rc genhtml_legend=1 00:26:33.961 --rc geninfo_all_blocks=1 00:26:33.961 --rc geninfo_unexecuted_blocks=1 00:26:33.961 00:26:33.961 ' 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:33.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.961 --rc genhtml_branch_coverage=1 00:26:33.961 --rc genhtml_function_coverage=1 00:26:33.961 --rc genhtml_legend=1 00:26:33.961 --rc geninfo_all_blocks=1 00:26:33.961 --rc geninfo_unexecuted_blocks=1 00:26:33.961 00:26:33.961 ' 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:33.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.961 --rc genhtml_branch_coverage=1 00:26:33.961 --rc genhtml_function_coverage=1 00:26:33.961 --rc genhtml_legend=1 00:26:33.961 --rc geninfo_all_blocks=1 00:26:33.961 --rc geninfo_unexecuted_blocks=1 00:26:33.961 00:26:33.961 ' 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:33.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.961 --rc genhtml_branch_coverage=1 00:26:33.961 --rc genhtml_function_coverage=1 00:26:33.961 --rc genhtml_legend=1 00:26:33.961 --rc geninfo_all_blocks=1 00:26:33.961 --rc geninfo_unexecuted_blocks=1 00:26:33.961 
00:26:33.961 ' 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
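The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) gates the lcov coverage flags on the installed lcov version. The comparison it performs, reduced to a self-contained sketch (the real helper also implements the '>', '>=', and '==' operators):

lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    # Compare field by field, treating missing components as 0.
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal is not "less than"
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use pre-2.0 lcov flags"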
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:33.961 11:48:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:33.961 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:33.962 Cannot find device "nvmf_init_br" 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:33.962 Cannot find device "nvmf_init_br2" 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:33.962 Cannot find device "nvmf_tgt_br" 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:33.962 Cannot find device "nvmf_tgt_br2" 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:33.962 Cannot find device "nvmf_init_br" 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:33.962 Cannot find device "nvmf_init_br2" 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:33.962 Cannot find device "nvmf_tgt_br" 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:33.962 Cannot find device "nvmf_tgt_br2" 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:33.962 Cannot find device "nvmf_br" 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:33.962 Cannot find device "nvmf_init_if" 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:33.962 Cannot find device "nvmf_init_if2" 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:33.962 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:33.962 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:33.962 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:34.221 11:48:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:34.221 11:48:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:34.221 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:34.221 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:26:34.221 00:26:34.221 --- 10.0.0.3 ping statistics --- 00:26:34.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.221 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:34.221 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:34.221 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:34.221 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:26:34.221 00:26:34.221 --- 10.0.0.4 ping statistics --- 00:26:34.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.221 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:34.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:34.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:26:34.222 00:26:34.222 --- 10.0.0.1 ping statistics --- 00:26:34.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.222 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:34.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:34.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:26:34.222 00:26:34.222 --- 10.0.0.2 ping statistics --- 00:26:34.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.222 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=107675 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 107675 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 107675 ']' 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.222 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:34.480 [2024-12-16 11:48:35.024504] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
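Aside from the xtrace noise, the common.sh setup traced above reduces to a small, reproducible network fixture: two initiator-side veth pairs left in the root namespace, two target-side pairs whose device ends move into a private namespace, all four peer ends enslaved to one bridge, plus iptables rules admitting NVMe/TCP's port 4420. (The "Cannot find device" errors at the top are only cleanup of a previous run's devices; each is guarded by the harness's "|| true"-style fallback, visible as the "-- # true" trace lines.) A condensed sketch of the same topology — device names and addresses taken from the trace; the harness's ipts wrapper, which tags each rule with an SPDK_NVMF comment for later cleanup, is simplified to plain iptables, and teardown/error handling is omitted:

#!/usr/bin/env bash
set -e
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# Initiator-side pairs stay in the root namespace; the target-side *_if
# ends move into the namespace, their *_br peers stay behind for the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: 10.0.0.1/.2 on the host side, 10.0.0.3/.4 in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# One bridge ties the four peer ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Let NVMe/TCP (port 4420) in, and allow bridge-local forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks mirroring the four pings in the trace.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec "$NS" ping -c 1 10.0.0.1
ip netns exec "$NS" ping -c 1 10.0.0.2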
00:26:34.481 [2024-12-16 11:48:35.025769] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:26:34.481 [2024-12-16 11:48:35.025847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.481 [2024-12-16 11:48:35.175273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:34.739 [2024-12-16 11:48:35.236324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.739 [2024-12-16 11:48:35.236378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.739 [2024-12-16 11:48:35.236390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.739 [2024-12-16 11:48:35.236398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.739 [2024-12-16 11:48:35.236405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:34.739 [2024-12-16 11:48:35.237569] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.739 [2024-12-16 11:48:35.237784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.739 [2024-12-16 11:48:35.237883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:34.739 [2024-12-16 11:48:35.237890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.739 [2024-12-16 11:48:35.338517] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:34.739 [2024-12-16 11:48:35.338801] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:34.739 [2024-12-16 11:48:35.339017] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:34.739 [2024-12-16 11:48:35.339445] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:34.739 [2024-12-16 11:48:35.339687] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
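The nvmfappstart step then boils down to launching nvmf_tgt inside that namespace and polling its RPC socket until the app answers; the notices above confirm all four reactors and the app thread came up in interrupt mode rather than busy-polling. A minimal sketch — binary path and flags verbatim from the trace, but the readiness probe here uses rpc_get_methods, which is just one simple way to poll and not necessarily what the waitforlisten helper does internally:

# 4-core mask (-m 0xF), tracepoint groups 0xFFFF, interrupt mode
# instead of the default busy-poll reactors.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!

# Block until the app serves JSON-RPC on its UNIX-domain socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid"   # abort if the target process died
    sleep 0.5
done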
00:26:35.696 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.696 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:26:35.696 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:35.696 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:35.696 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.696 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.696 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:35.696 [2024-12-16 11:48:36.443407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.954 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:36.213 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:26:36.213 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:36.472 11:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:26:36.472 11:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:36.732 11:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:26:36.732 11:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:37.299 11:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:26:37.299 11:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:26:37.558 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:37.816 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:26:37.816 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:38.075 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:26:38.075 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:38.333 11:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:26:38.334 11:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:26:38.592 11:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:39.157 11:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:26:39.157 11:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:39.414 11:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:26:39.414 11:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:39.672 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:39.930 [2024-12-16 11:48:40.531300] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:39.930 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:26:40.189 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:26:40.447 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:26:40.705 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:26:40.705 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:26:40.705 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:40.705 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:26:40.705 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:26:40.705 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:26:42.650 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:42.650 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:42.650 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:42.650 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:26:42.650 11:48:43 
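Stripped of the xtrace prefixes, everything fio.sh does between nvmfappstart and the first fio run is a short RPC/nvme-cli sequence: one TCP transport, seven 64 MiB malloc bdevs (two exported directly, two behind a RAID-0, three behind a concat), one subsystem carrying four namespaces and a TCP listener, then a host-side connect that waits for all four block devices to surface. The same sequence condensed, with arguments as in the trace — the host UUID is specific to this test VM, so substitute your own:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192      # -u: in-capsule data size
for i in $(seq 7); do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

$rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns "$nqn" Malloc0
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" raid0
$rpc nvmf_subsystem_add_ns "$nqn" concat0

# Host side: connect over the veth/bridge path built earlier, then wait
# (as waitforserial does) until all 4 namespaces appear as block devices.
uuid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
nvme connect -t tcp -n "$nqn" -a 10.0.0.3 -s 4420 \
     --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$uuid" --hostid="$uuid"
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]; do
    sleep 2
done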
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:42.650 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:26:42.650 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:26:42.650 [global] 00:26:42.650 thread=1 00:26:42.650 invalidate=1 00:26:42.650 rw=write 00:26:42.650 time_based=1 00:26:42.650 runtime=1 00:26:42.650 ioengine=libaio 00:26:42.650 direct=1 00:26:42.650 bs=4096 00:26:42.650 iodepth=1 00:26:42.650 norandommap=0 00:26:42.650 numjobs=1 00:26:42.650 00:26:42.650 verify_dump=1 00:26:42.650 verify_backlog=512 00:26:42.650 verify_state_save=0 00:26:42.650 do_verify=1 00:26:42.650 verify=crc32c-intel 00:26:42.650 [job0] 00:26:42.650 filename=/dev/nvme0n1 00:26:42.650 [job1] 00:26:42.650 filename=/dev/nvme0n2 00:26:42.650 [job2] 00:26:42.650 filename=/dev/nvme0n3 00:26:42.650 [job3] 00:26:42.650 filename=/dev/nvme0n4 00:26:42.650 Could not set queue depth (nvme0n1) 00:26:42.650 Could not set queue depth (nvme0n2) 00:26:42.650 Could not set queue depth (nvme0n3) 00:26:42.650 Could not set queue depth (nvme0n4) 00:26:42.909 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:42.910 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:42.910 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:42.910 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:42.910 fio-3.35 00:26:42.910 Starting 4 threads 00:26:44.287 00:26:44.287 job0: (groupid=0, jobs=1): err= 0: pid=107968: Mon Dec 16 11:48:44 2024 00:26:44.287 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:26:44.287 slat (nsec): min=13884, max=63744, avg=20754.80, stdev=5031.79 00:26:44.287 clat (usec): min=180, max=2183, avg=312.82, stdev=81.32 00:26:44.287 lat (usec): min=199, max=2203, avg=333.57, stdev=81.54 00:26:44.287 clat percentiles (usec): 00:26:44.287 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 281], 00:26:44.287 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 314], 00:26:44.287 | 70.00th=[ 322], 80.00th=[ 347], 90.00th=[ 412], 95.00th=[ 429], 00:26:44.287 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 553], 99.95th=[ 2180], 00:26:44.287 | 99.99th=[ 2180] 00:26:44.287 write: IOPS=1892, BW=7568KiB/s (7750kB/s)(7576KiB/1001msec); 0 zone resets 00:26:44.287 slat (nsec): min=19799, max=96574, avg=29472.10, stdev=7428.12 00:26:44.287 clat (usec): min=110, max=833, avg=223.79, stdev=34.18 00:26:44.287 lat (usec): min=139, max=865, avg=253.27, stdev=34.40 00:26:44.287 clat percentiles (usec): 00:26:44.287 | 1.00th=[ 139], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 208], 00:26:44.287 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:26:44.287 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 258], 00:26:44.287 | 99.00th=[ 347], 99.50th=[ 375], 99.90th=[ 791], 99.95th=[ 832], 00:26:44.287 | 99.99th=[ 832] 00:26:44.287 bw ( KiB/s): min= 8192, max= 8192, per=23.01%, avg=8192.00, stdev= 0.00, samples=1 00:26:44.287 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:26:44.287 lat (usec) : 250=58.37%, 500=41.46%, 750=0.09%, 1000=0.06% 00:26:44.287 lat 
(msec) : 4=0.03% 00:26:44.287 cpu : usr=1.50%, sys=6.90%, ctx=3430, majf=0, minf=7 00:26:44.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:44.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.287 issued rwts: total=1536,1894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:44.287 job1: (groupid=0, jobs=1): err= 0: pid=107969: Mon Dec 16 11:48:44 2024 00:26:44.287 read: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(9.84MiB/1001msec) 00:26:44.287 slat (nsec): min=13930, max=49924, avg=17597.82, stdev=4183.05 00:26:44.287 clat (usec): min=172, max=1163, avg=201.24, stdev=22.78 00:26:44.287 lat (usec): min=188, max=1193, avg=218.84, stdev=23.34 00:26:44.287 clat percentiles (usec): 00:26:44.287 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:26:44.287 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 202], 00:26:44.287 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 217], 95.00th=[ 223], 00:26:44.287 | 99.00th=[ 235], 99.50th=[ 243], 99.90th=[ 310], 99.95th=[ 334], 00:26:44.287 | 99.99th=[ 1172] 00:26:44.287 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:26:44.287 slat (nsec): min=19408, max=92954, avg=25396.88, stdev=7380.20 00:26:44.287 clat (usec): min=121, max=307, avg=145.88, stdev=11.98 00:26:44.287 lat (usec): min=143, max=346, avg=171.27, stdev=15.25 00:26:44.287 clat percentiles (usec): 00:26:44.287 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 137], 00:26:44.287 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:26:44.287 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 167], 00:26:44.287 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 247], 99.95th=[ 253], 00:26:44.287 | 99.99th=[ 310] 00:26:44.287 bw ( KiB/s): min=12288, max=12288, per=34.51%, avg=12288.00, stdev= 0.00, samples=1 00:26:44.287 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:26:44.287 lat (usec) : 250=99.84%, 500=0.14% 00:26:44.287 lat (msec) : 2=0.02% 00:26:44.287 cpu : usr=1.60%, sys=8.70%, ctx=5080, majf=0, minf=13 00:26:44.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:44.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.287 issued rwts: total=2520,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:44.287 job2: (groupid=0, jobs=1): err= 0: pid=107970: Mon Dec 16 11:48:44 2024 00:26:44.287 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:26:44.287 slat (nsec): min=15971, max=55799, avg=20134.86, stdev=3445.36 00:26:44.287 clat (usec): min=190, max=659, avg=311.36, stdev=60.23 00:26:44.287 lat (usec): min=208, max=678, avg=331.49, stdev=60.31 00:26:44.287 clat percentiles (usec): 00:26:44.287 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 221], 20.00th=[ 285], 00:26:44.287 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 314], 00:26:44.287 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 404], 95.00th=[ 416], 00:26:44.287 | 99.00th=[ 465], 99.50th=[ 490], 99.90th=[ 578], 99.95th=[ 660], 00:26:44.287 | 99.99th=[ 660] 00:26:44.287 write: IOPS=1895, BW=7580KiB/s (7762kB/s)(7588KiB/1001msec); 0 zone resets 00:26:44.287 slat (nsec): min=22434, max=84480, avg=28848.72, 
stdev=6492.66 00:26:44.287 clat (usec): min=120, max=2506, avg=225.72, stdev=59.91 00:26:44.287 lat (usec): min=145, max=2547, avg=254.57, stdev=60.31 00:26:44.287 clat percentiles (usec): 00:26:44.287 | 1.00th=[ 174], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:26:44.287 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:26:44.287 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 255], 00:26:44.287 | 99.00th=[ 347], 99.50th=[ 375], 99.90th=[ 742], 99.95th=[ 2507], 00:26:44.287 | 99.99th=[ 2507] 00:26:44.287 bw ( KiB/s): min= 8192, max= 8192, per=23.01%, avg=8192.00, stdev= 0.00, samples=1 00:26:44.287 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:26:44.287 lat (usec) : 250=58.64%, 500=41.16%, 750=0.17% 00:26:44.287 lat (msec) : 4=0.03% 00:26:44.287 cpu : usr=1.40%, sys=6.70%, ctx=3433, majf=0, minf=10 00:26:44.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:44.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.287 issued rwts: total=1536,1897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:44.287 job3: (groupid=0, jobs=1): err= 0: pid=107971: Mon Dec 16 11:48:44 2024 00:26:44.287 read: IOPS=2342, BW=9371KiB/s (9596kB/s)(9380KiB/1001msec) 00:26:44.287 slat (nsec): min=17495, max=66560, avg=22410.35, stdev=5371.65 00:26:44.287 clat (usec): min=178, max=1796, avg=203.91, stdev=37.37 00:26:44.287 lat (usec): min=197, max=1815, avg=226.32, stdev=38.03 00:26:44.287 clat percentiles (usec): 00:26:44.287 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:26:44.287 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:26:44.287 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 227], 00:26:44.287 | 99.00th=[ 245], 99.50th=[ 334], 99.90th=[ 416], 99.95th=[ 420], 00:26:44.287 | 99.99th=[ 1795] 00:26:44.287 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:26:44.287 slat (usec): min=24, max=132, avg=31.05, stdev= 8.75 00:26:44.287 clat (usec): min=123, max=356, avg=147.77, stdev=13.92 00:26:44.287 lat (usec): min=152, max=387, avg=178.82, stdev=18.11 00:26:44.287 clat percentiles (usec): 00:26:44.287 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:26:44.287 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:26:44.287 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 172], 00:26:44.287 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 269], 99.95th=[ 314], 00:26:44.287 | 99.99th=[ 355] 00:26:44.287 bw ( KiB/s): min=11632, max=11632, per=32.67%, avg=11632.00, stdev= 0.00, samples=1 00:26:44.287 iops : min= 2908, max= 2908, avg=2908.00, stdev= 0.00, samples=1 00:26:44.287 lat (usec) : 250=99.47%, 500=0.51% 00:26:44.287 lat (msec) : 2=0.02% 00:26:44.287 cpu : usr=2.60%, sys=9.80%, ctx=4920, majf=0, minf=9 00:26:44.288 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:44.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.288 issued rwts: total=2345,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.288 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:44.288 00:26:44.288 Run status group 0 (all jobs): 00:26:44.288 READ: bw=31.0MiB/s (32.5MB/s), 6138KiB/s-9.83MiB/s 
(6285kB/s-10.3MB/s), io=31.0MiB (32.5MB), run=1001-1001msec 00:26:44.288 WRITE: bw=34.8MiB/s (36.5MB/s), 7568KiB/s-9.99MiB/s (7750kB/s-10.5MB/s), io=34.8MiB (36.5MB), run=1001-1001msec 00:26:44.288 00:26:44.288 Disk stats (read/write): 00:26:44.288 nvme0n1: ios=1435/1536, merge=0/0, ticks=464/371, in_queue=835, util=88.08% 00:26:44.288 nvme0n2: ios=2097/2358, merge=0/0, ticks=500/365, in_queue=865, util=93.23% 00:26:44.288 nvme0n3: ios=1436/1536, merge=0/0, ticks=506/369, in_queue=875, util=92.78% 00:26:44.288 nvme0n4: ios=2048/2178, merge=0/0, ticks=426/333, in_queue=759, util=89.72% 00:26:44.288 11:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:26:44.288 [global] 00:26:44.288 thread=1 00:26:44.288 invalidate=1 00:26:44.288 rw=randwrite 00:26:44.288 time_based=1 00:26:44.288 runtime=1 00:26:44.288 ioengine=libaio 00:26:44.288 direct=1 00:26:44.288 bs=4096 00:26:44.288 iodepth=1 00:26:44.288 norandommap=0 00:26:44.288 numjobs=1 00:26:44.288 00:26:44.288 verify_dump=1 00:26:44.288 verify_backlog=512 00:26:44.288 verify_state_save=0 00:26:44.288 do_verify=1 00:26:44.288 verify=crc32c-intel 00:26:44.288 [job0] 00:26:44.288 filename=/dev/nvme0n1 00:26:44.288 [job1] 00:26:44.288 filename=/dev/nvme0n2 00:26:44.288 [job2] 00:26:44.288 filename=/dev/nvme0n3 00:26:44.288 [job3] 00:26:44.288 filename=/dev/nvme0n4 00:26:44.288 Could not set queue depth (nvme0n1) 00:26:44.288 Could not set queue depth (nvme0n2) 00:26:44.288 Could not set queue depth (nvme0n3) 00:26:44.288 Could not set queue depth (nvme0n4) 00:26:44.288 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:44.288 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:44.288 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:44.288 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:44.288 fio-3.35 00:26:44.288 Starting 4 threads 00:26:45.669 00:26:45.669 job0: (groupid=0, jobs=1): err= 0: pid=108030: Mon Dec 16 11:48:46 2024 00:26:45.669 read: IOPS=2149, BW=8599KiB/s (8806kB/s)(8608KiB/1001msec) 00:26:45.669 slat (nsec): min=10499, max=73860, avg=17133.79, stdev=5832.12 00:26:45.669 clat (usec): min=164, max=632, avg=216.80, stdev=46.31 00:26:45.669 lat (usec): min=181, max=654, avg=233.93, stdev=46.35 00:26:45.669 clat percentiles (usec): 00:26:45.669 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:26:45.669 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:26:45.669 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 281], 95.00th=[ 326], 00:26:45.669 | 99.00th=[ 396], 99.50th=[ 433], 99.90th=[ 506], 99.95th=[ 562], 00:26:45.669 | 99.99th=[ 635] 00:26:45.669 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:26:45.669 slat (nsec): min=14658, max=88956, avg=27767.50, stdev=10725.44 00:26:45.669 clat (usec): min=111, max=383, avg=162.39, stdev=42.60 00:26:45.669 lat (usec): min=133, max=402, avg=190.16, stdev=42.55 00:26:45.669 clat percentiles (usec): 00:26:45.669 | 1.00th=[ 121], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 135], 00:26:45.669 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 153], 00:26:45.669 | 70.00th=[ 161], 80.00th=[ 178], 90.00th=[ 239], 95.00th=[ 269], 00:26:45.669 | 99.00th=[ 
302], 99.50th=[ 314], 99.90th=[ 338], 99.95th=[ 343], 00:26:45.669 | 99.99th=[ 383] 00:26:45.669 bw ( KiB/s): min=12288, max=12288, per=42.90%, avg=12288.00, stdev= 0.00, samples=1 00:26:45.669 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:26:45.669 lat (usec) : 250=89.64%, 500=10.29%, 750=0.06% 00:26:45.669 cpu : usr=1.20%, sys=9.00%, ctx=4717, majf=0, minf=13 00:26:45.669 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.669 issued rwts: total=2152,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.669 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:45.669 job1: (groupid=0, jobs=1): err= 0: pid=108031: Mon Dec 16 11:48:46 2024 00:26:45.669 read: IOPS=1468, BW=5874KiB/s (6015kB/s)(5880KiB/1001msec) 00:26:45.669 slat (nsec): min=8181, max=79345, avg=16202.30, stdev=6338.97 00:26:45.669 clat (usec): min=192, max=1215, avg=361.47, stdev=73.32 00:26:45.669 lat (usec): min=204, max=1235, avg=377.68, stdev=74.31 00:26:45.669 clat percentiles (usec): 00:26:45.669 | 1.00th=[ 204], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 302], 00:26:45.669 | 30.00th=[ 318], 40.00th=[ 343], 50.00th=[ 363], 60.00th=[ 379], 00:26:45.669 | 70.00th=[ 392], 80.00th=[ 408], 90.00th=[ 429], 95.00th=[ 457], 00:26:45.669 | 99.00th=[ 586], 99.50th=[ 775], 99.90th=[ 889], 99.95th=[ 1221], 00:26:45.669 | 99.99th=[ 1221] 00:26:45.669 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:26:45.669 slat (usec): min=16, max=131, avg=26.20, stdev= 6.95 00:26:45.669 clat (usec): min=121, max=1302, avg=259.48, stdev=49.78 00:26:45.670 lat (usec): min=155, max=1322, avg=285.68, stdev=49.79 00:26:45.670 clat percentiles (usec): 00:26:45.670 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 219], 00:26:45.670 | 30.00th=[ 227], 40.00th=[ 241], 50.00th=[ 262], 60.00th=[ 273], 00:26:45.670 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 326], 00:26:45.670 | 99.00th=[ 371], 99.50th=[ 416], 99.90th=[ 586], 99.95th=[ 1303], 00:26:45.670 | 99.99th=[ 1303] 00:26:45.670 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:26:45.670 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:26:45.670 lat (usec) : 250=24.25%, 500=74.62%, 750=0.83%, 1000=0.23% 00:26:45.670 lat (msec) : 2=0.07% 00:26:45.670 cpu : usr=1.30%, sys=5.30%, ctx=3033, majf=0, minf=5 00:26:45.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.670 issued rwts: total=1470,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:45.670 job2: (groupid=0, jobs=1): err= 0: pid=108032: Mon Dec 16 11:48:46 2024 00:26:45.670 read: IOPS=1504, BW=6018KiB/s (6162kB/s)(6024KiB/1001msec) 00:26:45.670 slat (nsec): min=8518, max=79056, avg=19372.69, stdev=9102.80 00:26:45.670 clat (usec): min=200, max=880, avg=350.29, stdev=64.47 00:26:45.670 lat (usec): min=213, max=892, avg=369.66, stdev=61.74 00:26:45.670 clat percentiles (usec): 00:26:45.670 | 1.00th=[ 247], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 289], 00:26:45.670 | 30.00th=[ 306], 40.00th=[ 326], 50.00th=[ 359], 60.00th=[ 375], 00:26:45.670 | 
70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 445], 00:26:45.670 | 99.00th=[ 523], 99.50th=[ 578], 99.90th=[ 766], 99.95th=[ 881], 00:26:45.670 | 99.99th=[ 881] 00:26:45.670 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:26:45.670 slat (nsec): min=14368, max=80624, avg=29969.76, stdev=10186.84 00:26:45.670 clat (usec): min=132, max=697, avg=253.74, stdev=50.97 00:26:45.670 lat (usec): min=161, max=730, avg=283.71, stdev=47.12 00:26:45.670 clat percentiles (usec): 00:26:45.670 | 1.00th=[ 165], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:26:45.670 | 30.00th=[ 210], 40.00th=[ 239], 50.00th=[ 265], 60.00th=[ 277], 00:26:45.670 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 322], 00:26:45.670 | 99.00th=[ 371], 99.50th=[ 445], 99.90th=[ 506], 99.95th=[ 701], 00:26:45.670 | 99.99th=[ 701] 00:26:45.670 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:26:45.670 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:26:45.670 lat (usec) : 250=22.16%, 500=77.02%, 750=0.76%, 1000=0.07% 00:26:45.670 cpu : usr=1.20%, sys=6.30%, ctx=3053, majf=0, minf=19 00:26:45.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.670 issued rwts: total=1506,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:45.670 job3: (groupid=0, jobs=1): err= 0: pid=108033: Mon Dec 16 11:48:46 2024 00:26:45.670 read: IOPS=1468, BW=5874KiB/s (6015kB/s)(5880KiB/1001msec) 00:26:45.670 slat (nsec): min=8460, max=63621, avg=15564.79, stdev=5726.74 00:26:45.670 clat (usec): min=188, max=1294, avg=361.85, stdev=75.14 00:26:45.670 lat (usec): min=208, max=1316, avg=377.42, stdev=75.44 00:26:45.670 clat percentiles (usec): 00:26:45.670 | 1.00th=[ 212], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 302], 00:26:45.670 | 30.00th=[ 318], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 379], 00:26:45.670 | 70.00th=[ 392], 80.00th=[ 408], 90.00th=[ 429], 95.00th=[ 457], 00:26:45.670 | 99.00th=[ 594], 99.50th=[ 742], 99.90th=[ 889], 99.95th=[ 1303], 00:26:45.670 | 99.99th=[ 1303] 00:26:45.670 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:26:45.670 slat (usec): min=13, max=103, avg=23.77, stdev= 7.23 00:26:45.670 clat (usec): min=127, max=1268, avg=262.45, stdev=52.21 00:26:45.670 lat (usec): min=159, max=1291, avg=286.22, stdev=50.45 00:26:45.670 clat percentiles (usec): 00:26:45.670 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 219], 00:26:45.670 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 265], 60.00th=[ 277], 00:26:45.670 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 318], 95.00th=[ 330], 00:26:45.670 | 99.00th=[ 400], 99.50th=[ 449], 99.90th=[ 594], 99.95th=[ 1270], 00:26:45.670 | 99.99th=[ 1270] 00:26:45.670 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:26:45.670 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:26:45.670 lat (usec) : 250=24.22%, 500=74.48%, 750=1.03%, 1000=0.20% 00:26:45.670 lat (msec) : 2=0.07% 00:26:45.670 cpu : usr=1.00%, sys=5.10%, ctx=3019, majf=0, minf=7 00:26:45.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.670 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.670 issued rwts: total=1470,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:45.670 00:26:45.670 Run status group 0 (all jobs): 00:26:45.670 READ: bw=25.7MiB/s (27.0MB/s), 5874KiB/s-8599KiB/s (6015kB/s-8806kB/s), io=25.8MiB (27.0MB), run=1001-1001msec 00:26:45.670 WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:26:45.670 00:26:45.670 Disk stats (read/write): 00:26:45.670 nvme0n1: ios=2098/2167, merge=0/0, ticks=509/361, in_queue=870, util=89.78% 00:26:45.670 nvme0n2: ios=1119/1536, merge=0/0, ticks=448/400, in_queue=848, util=89.98% 00:26:45.670 nvme0n3: ios=1121/1536, merge=0/0, ticks=431/406, in_queue=837, util=89.60% 00:26:45.670 nvme0n4: ios=1073/1536, merge=0/0, ticks=383/394, in_queue=777, util=89.74% 00:26:45.670 11:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:26:45.670 [global] 00:26:45.670 thread=1 00:26:45.670 invalidate=1 00:26:45.670 rw=write 00:26:45.670 time_based=1 00:26:45.670 runtime=1 00:26:45.670 ioengine=libaio 00:26:45.670 direct=1 00:26:45.670 bs=4096 00:26:45.670 iodepth=128 00:26:45.670 norandommap=0 00:26:45.670 numjobs=1 00:26:45.670 00:26:45.670 verify_dump=1 00:26:45.670 verify_backlog=512 00:26:45.670 verify_state_save=0 00:26:45.670 do_verify=1 00:26:45.670 verify=crc32c-intel 00:26:45.670 [job0] 00:26:45.670 filename=/dev/nvme0n1 00:26:45.670 [job1] 00:26:45.670 filename=/dev/nvme0n2 00:26:45.670 [job2] 00:26:45.670 filename=/dev/nvme0n3 00:26:45.670 [job3] 00:26:45.670 filename=/dev/nvme0n4 00:26:45.670 Could not set queue depth (nvme0n1) 00:26:45.670 Could not set queue depth (nvme0n2) 00:26:45.670 Could not set queue depth (nvme0n3) 00:26:45.670 Could not set queue depth (nvme0n4) 00:26:45.670 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:45.670 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:45.670 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:45.670 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:45.670 fio-3.35 00:26:45.670 Starting 4 threads 00:26:47.045 00:26:47.045 job0: (groupid=0, jobs=1): err= 0: pid=108089: Mon Dec 16 11:48:47 2024 00:26:47.045 read: IOPS=5505, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1002msec) 00:26:47.045 slat (usec): min=9, max=2739, avg=88.24, stdev=404.33 00:26:47.045 clat (usec): min=441, max=13891, avg=11553.91, stdev=1000.26 00:26:47.045 lat (usec): min=2983, max=14662, avg=11642.15, stdev=927.89 00:26:47.045 clat percentiles (usec): 00:26:47.045 | 1.00th=[ 6456], 5.00th=[ 9634], 10.00th=[11207], 20.00th=[11338], 00:26:47.045 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:26:47.045 | 70.00th=[11994], 80.00th=[11994], 90.00th=[12256], 95.00th=[12387], 00:26:47.045 | 99.00th=[12780], 99.50th=[13435], 99.90th=[13829], 99.95th=[13829], 00:26:47.045 | 99.99th=[13829] 00:26:47.045 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:26:47.045 slat (usec): min=10, max=2766, avg=83.77, stdev=351.82 00:26:47.045 clat (usec): min=8437, max=14149, avg=11150.42, stdev=1151.11 00:26:47.045 lat (usec): min=8457, 
max=14168, avg=11234.19, stdev=1149.61 00:26:47.045 clat percentiles (usec): 00:26:47.045 | 1.00th=[ 9110], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:26:47.045 | 30.00th=[10290], 40.00th=[10552], 50.00th=[11207], 60.00th=[11600], 00:26:47.045 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:26:47.045 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14091], 99.95th=[14091], 00:26:47.045 | 99.99th=[14091] 00:26:47.045 bw ( KiB/s): min=21992, max=23064, per=34.22%, avg=22528.00, stdev=758.02, samples=2 00:26:47.045 iops : min= 5498, max= 5766, avg=5632.00, stdev=189.50, samples=2 00:26:47.045 lat (usec) : 500=0.01% 00:26:47.045 lat (msec) : 4=0.29%, 10=13.17%, 20=86.54% 00:26:47.045 cpu : usr=3.90%, sys=15.68%, ctx=558, majf=0, minf=9 00:26:47.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:26:47.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:47.045 issued rwts: total=5517,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:47.045 job1: (groupid=0, jobs=1): err= 0: pid=108090: Mon Dec 16 11:48:47 2024 00:26:47.045 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:26:47.045 slat (usec): min=4, max=8860, avg=136.67, stdev=698.58 00:26:47.045 clat (usec): min=9669, max=37020, avg=16513.23, stdev=4168.66 00:26:47.045 lat (usec): min=9692, max=37068, avg=16649.90, stdev=4220.88 00:26:47.045 clat percentiles (usec): 00:26:47.045 | 1.00th=[11207], 5.00th=[12518], 10.00th=[13173], 20.00th=[13698], 00:26:47.045 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14746], 60.00th=[15664], 00:26:47.045 | 70.00th=[17433], 80.00th=[19792], 90.00th=[21627], 95.00th=[24511], 00:26:47.045 | 99.00th=[31589], 99.50th=[33162], 99.90th=[36963], 99.95th=[36963], 00:26:47.045 | 99.99th=[36963] 00:26:47.045 write: IOPS=3341, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1010msec); 0 zone resets 00:26:47.045 slat (usec): min=9, max=7864, avg=164.01, stdev=646.89 00:26:47.045 clat (usec): min=8850, max=47955, avg=22822.76, stdev=9122.40 00:26:47.045 lat (usec): min=9862, max=47967, avg=22986.76, stdev=9190.00 00:26:47.045 clat percentiles (usec): 00:26:47.045 | 1.00th=[12256], 5.00th=[13042], 10.00th=[13435], 20.00th=[14877], 00:26:47.045 | 30.00th=[16909], 40.00th=[18482], 50.00th=[20579], 60.00th=[23200], 00:26:47.045 | 70.00th=[24249], 80.00th=[28967], 90.00th=[39060], 95.00th=[44303], 00:26:47.045 | 99.00th=[45351], 99.50th=[45351], 99.90th=[47973], 99.95th=[47973], 00:26:47.045 | 99.99th=[47973] 00:26:47.045 bw ( KiB/s): min=12768, max=13216, per=19.73%, avg=12992.00, stdev=316.78, samples=2 00:26:47.045 iops : min= 3192, max= 3304, avg=3248.00, stdev=79.20, samples=2 00:26:47.045 lat (msec) : 10=0.28%, 20=64.23%, 50=35.49% 00:26:47.045 cpu : usr=3.77%, sys=9.42%, ctx=379, majf=0, minf=11 00:26:47.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:47.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:47.045 issued rwts: total=3072,3375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:47.045 job2: (groupid=0, jobs=1): err= 0: pid=108091: Mon Dec 16 11:48:47 2024 00:26:47.045 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:26:47.045 slat (usec): min=5, 
max=9601, avg=179.19, stdev=846.19 00:26:47.045 clat (usec): min=13780, max=47068, avg=23413.68, stdev=7742.75 00:26:47.045 lat (usec): min=13803, max=48432, avg=23592.87, stdev=7813.39 00:26:47.045 clat percentiles (usec): 00:26:47.045 | 1.00th=[14484], 5.00th=[16581], 10.00th=[16712], 20.00th=[16909], 00:26:47.045 | 30.00th=[16909], 40.00th=[17171], 50.00th=[19530], 60.00th=[23987], 00:26:47.045 | 70.00th=[28967], 80.00th=[31589], 90.00th=[34341], 95.00th=[36439], 00:26:47.045 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:26:47.045 | 99.99th=[46924] 00:26:47.045 write: IOPS=2475, BW=9903KiB/s (10.1MB/s)(9992KiB/1009msec); 0 zone resets 00:26:47.045 slat (usec): min=16, max=12497, avg=244.79, stdev=867.26 00:26:47.045 clat (usec): min=7023, max=62062, avg=31388.12, stdev=11145.38 00:26:47.045 lat (usec): min=8195, max=62094, avg=31632.91, stdev=11207.78 00:26:47.045 clat percentiles (usec): 00:26:47.045 | 1.00th=[11207], 5.00th=[21627], 10.00th=[22938], 20.00th=[23725], 00:26:47.045 | 30.00th=[24249], 40.00th=[24773], 50.00th=[26608], 60.00th=[28705], 00:26:47.045 | 70.00th=[33424], 80.00th=[41681], 90.00th=[50070], 95.00th=[55313], 00:26:47.045 | 99.00th=[60556], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129], 00:26:47.045 | 99.99th=[62129] 00:26:47.045 bw ( KiB/s): min= 8208, max=10768, per=14.41%, avg=9488.00, stdev=1810.19, samples=2 00:26:47.045 iops : min= 2052, max= 2692, avg=2372.00, stdev=452.55, samples=2 00:26:47.045 lat (msec) : 10=0.37%, 20=24.07%, 50=70.00%, 100=5.57% 00:26:47.045 cpu : usr=2.78%, sys=8.04%, ctx=392, majf=0, minf=11 00:26:47.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:47.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:47.045 issued rwts: total=2048,2498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:47.045 job3: (groupid=0, jobs=1): err= 0: pid=108092: Mon Dec 16 11:48:47 2024 00:26:47.045 read: IOPS=4846, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1002msec) 00:26:47.045 slat (usec): min=5, max=3243, avg=99.57, stdev=447.17 00:26:47.045 clat (usec): min=337, max=15976, avg=12811.24, stdev=1309.88 00:26:47.045 lat (usec): min=3104, max=17996, avg=12910.82, stdev=1261.97 00:26:47.045 clat percentiles (usec): 00:26:47.046 | 1.00th=[ 6849], 5.00th=[10814], 10.00th=[11469], 20.00th=[12256], 00:26:47.046 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:26:47.046 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13829], 95.00th=[14222], 00:26:47.046 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15926], 99.95th=[15926], 00:26:47.046 | 99.99th=[15926] 00:26:47.046 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:26:47.046 slat (usec): min=10, max=3130, avg=93.48, stdev=347.38 00:26:47.046 clat (usec): min=9646, max=15960, avg=12563.54, stdev=1178.23 00:26:47.046 lat (usec): min=9679, max=15978, avg=12657.03, stdev=1173.26 00:26:47.046 clat percentiles (usec): 00:26:47.046 | 1.00th=[10159], 5.00th=[10552], 10.00th=[10814], 20.00th=[11338], 00:26:47.046 | 30.00th=[11994], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:26:47.046 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14091], 95.00th=[14484], 00:26:47.046 | 99.00th=[15139], 99.50th=[15270], 99.90th=[15926], 99.95th=[15926], 00:26:47.046 | 99.99th=[15926] 00:26:47.046 bw ( KiB/s): min=20480, max=20480, per=31.10%, 
avg=20480.00, stdev= 0.00, samples=2 00:26:47.046 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:26:47.046 lat (usec) : 500=0.01% 00:26:47.046 lat (msec) : 4=0.32%, 10=0.59%, 20=99.08% 00:26:47.046 cpu : usr=3.20%, sys=15.38%, ctx=647, majf=0, minf=10 00:26:47.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:47.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:47.046 issued rwts: total=4856,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:47.046 00:26:47.046 Run status group 0 (all jobs): 00:26:47.046 READ: bw=59.9MiB/s (62.8MB/s), 8119KiB/s-21.5MiB/s (8314kB/s-22.6MB/s), io=60.5MiB (63.5MB), run=1002-1010msec 00:26:47.046 WRITE: bw=64.3MiB/s (67.4MB/s), 9903KiB/s-22.0MiB/s (10.1MB/s-23.0MB/s), io=64.9MiB (68.1MB), run=1002-1010msec 00:26:47.046 00:26:47.046 Disk stats (read/write): 00:26:47.046 nvme0n1: ios=4658/5023, merge=0/0, ticks=12426/12151, in_queue=24577, util=88.88% 00:26:47.046 nvme0n2: ios=2609/2967, merge=0/0, ticks=21406/29702, in_queue=51108, util=89.59% 00:26:47.046 nvme0n3: ios=1821/2048, merge=0/0, ticks=13583/21259, in_queue=34842, util=89.08% 00:26:47.046 nvme0n4: ios=4096/4503, merge=0/0, ticks=12190/12322, in_queue=24512, util=89.73% 00:26:47.046 11:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:26:47.046 [global] 00:26:47.046 thread=1 00:26:47.046 invalidate=1 00:26:47.046 rw=randwrite 00:26:47.046 time_based=1 00:26:47.046 runtime=1 00:26:47.046 ioengine=libaio 00:26:47.046 direct=1 00:26:47.046 bs=4096 00:26:47.046 iodepth=128 00:26:47.046 norandommap=0 00:26:47.046 numjobs=1 00:26:47.046 00:26:47.046 verify_dump=1 00:26:47.046 verify_backlog=512 00:26:47.046 verify_state_save=0 00:26:47.046 do_verify=1 00:26:47.046 verify=crc32c-intel 00:26:47.046 [job0] 00:26:47.046 filename=/dev/nvme0n1 00:26:47.046 [job1] 00:26:47.046 filename=/dev/nvme0n2 00:26:47.046 [job2] 00:26:47.046 filename=/dev/nvme0n3 00:26:47.046 [job3] 00:26:47.046 filename=/dev/nvme0n4 00:26:47.046 Could not set queue depth (nvme0n1) 00:26:47.046 Could not set queue depth (nvme0n2) 00:26:47.046 Could not set queue depth (nvme0n3) 00:26:47.046 Could not set queue depth (nvme0n4) 00:26:47.046 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:47.046 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:47.046 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:47.046 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:47.046 fio-3.35 00:26:47.046 Starting 4 threads 00:26:48.420 00:26:48.420 job0: (groupid=0, jobs=1): err= 0: pid=108145: Mon Dec 16 11:48:48 2024 00:26:48.420 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:26:48.420 slat (usec): min=6, max=25681, avg=174.34, stdev=1211.37 00:26:48.420 clat (msec): min=4, max=110, avg=21.41, stdev=15.38 00:26:48.420 lat (msec): min=4, max=110, avg=21.58, stdev=15.49 00:26:48.420 clat percentiles (msec): 00:26:48.420 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:26:48.421 | 30.00th=[ 14], 40.00th=[ 17], 
50.00th=[ 19], 60.00th=[ 20], 00:26:48.421 | 70.00th=[ 21], 80.00th=[ 27], 90.00th=[ 31], 95.00th=[ 50], 00:26:48.421 | 99.00th=[ 99], 99.50th=[ 109], 99.90th=[ 111], 99.95th=[ 111], 00:26:48.421 | 99.99th=[ 111] 00:26:48.421 write: IOPS=3246, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1010msec); 0 zone resets 00:26:48.421 slat (usec): min=3, max=21783, avg=134.64, stdev=1010.24 00:26:48.421 clat (msec): min=4, max=110, avg=18.99, stdev=10.33 00:26:48.421 lat (msec): min=4, max=110, avg=19.13, stdev=10.39 00:26:48.421 clat percentiles (msec): 00:26:48.421 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:26:48.421 | 30.00th=[ 14], 40.00th=[ 18], 50.00th=[ 19], 60.00th=[ 21], 00:26:48.421 | 70.00th=[ 22], 80.00th=[ 22], 90.00th=[ 26], 95.00th=[ 34], 00:26:48.421 | 99.00th=[ 59], 99.50th=[ 77], 99.90th=[ 87], 99.95th=[ 111], 00:26:48.421 | 99.99th=[ 111] 00:26:48.421 bw ( KiB/s): min= 9608, max=15600, per=23.52%, avg=12604.00, stdev=4236.98, samples=2 00:26:48.421 iops : min= 2402, max= 3900, avg=3151.00, stdev=1059.25, samples=2 00:26:48.421 lat (msec) : 10=14.30%, 20=47.02%, 50=34.81%, 100=3.39%, 250=0.49% 00:26:48.421 cpu : usr=2.68%, sys=9.12%, ctx=278, majf=0, minf=5 00:26:48.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:48.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:48.421 issued rwts: total=3072,3279,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:48.421 job1: (groupid=0, jobs=1): err= 0: pid=108146: Mon Dec 16 11:48:48 2024 00:26:48.421 read: IOPS=3343, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1005msec) 00:26:48.421 slat (usec): min=2, max=7042, avg=148.29, stdev=679.56 00:26:48.421 clat (usec): min=1891, max=28239, avg=19103.62, stdev=4266.25 00:26:48.421 lat (usec): min=5202, max=28257, avg=19251.91, stdev=4275.59 00:26:48.421 clat percentiles (usec): 00:26:48.421 | 1.00th=[ 8291], 5.00th=[10290], 10.00th=[11600], 20.00th=[16909], 00:26:48.421 | 30.00th=[18220], 40.00th=[19006], 50.00th=[20055], 60.00th=[20841], 00:26:48.421 | 70.00th=[21627], 80.00th=[22152], 90.00th=[23725], 95.00th=[24511], 00:26:48.421 | 99.00th=[26084], 99.50th=[27657], 99.90th=[28181], 99.95th=[28181], 00:26:48.421 | 99.99th=[28181] 00:26:48.421 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:26:48.421 slat (usec): min=4, max=8722, avg=133.12, stdev=596.17 00:26:48.421 clat (usec): min=3632, max=25123, avg=17549.35, stdev=3414.36 00:26:48.421 lat (usec): min=3655, max=25143, avg=17682.47, stdev=3423.40 00:26:48.421 clat percentiles (usec): 00:26:48.421 | 1.00th=[10159], 5.00th=[10421], 10.00th=[11338], 20.00th=[15401], 00:26:48.421 | 30.00th=[16909], 40.00th=[17957], 50.00th=[18482], 60.00th=[19006], 00:26:48.421 | 70.00th=[19268], 80.00th=[19792], 90.00th=[21103], 95.00th=[21890], 00:26:48.421 | 99.00th=[24511], 99.50th=[25035], 99.90th=[25035], 99.95th=[25035], 00:26:48.421 | 99.99th=[25035] 00:26:48.421 bw ( KiB/s): min=12288, max=16416, per=26.78%, avg=14352.00, stdev=2918.94, samples=2 00:26:48.421 iops : min= 3072, max= 4104, avg=3588.00, stdev=729.73, samples=2 00:26:48.421 lat (msec) : 2=0.01%, 4=0.07%, 10=1.97%, 20=63.22%, 50=34.72% 00:26:48.421 cpu : usr=2.99%, sys=9.26%, ctx=775, majf=0, minf=20 00:26:48.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:48.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:26:48.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:48.421 issued rwts: total=3360,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:48.421 job2: (groupid=0, jobs=1): err= 0: pid=108147: Mon Dec 16 11:48:48 2024 00:26:48.421 read: IOPS=3107, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1002msec) 00:26:48.421 slat (usec): min=3, max=5988, avg=152.12, stdev=669.74 00:26:48.421 clat (usec): min=980, max=27404, avg=20184.92, stdev=3370.22 00:26:48.421 lat (usec): min=4105, max=29067, avg=20337.05, stdev=3350.51 00:26:48.421 clat percentiles (usec): 00:26:48.421 | 1.00th=[ 7177], 5.00th=[13698], 10.00th=[14353], 20.00th=[18482], 00:26:48.421 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20579], 60.00th=[21365], 00:26:48.421 | 70.00th=[21890], 80.00th=[22414], 90.00th=[23725], 95.00th=[24511], 00:26:48.421 | 99.00th=[26084], 99.50th=[26346], 99.90th=[27395], 99.95th=[27395], 00:26:48.421 | 99.99th=[27395] 00:26:48.421 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:26:48.421 slat (usec): min=8, max=6136, avg=140.34, stdev=586.55 00:26:48.421 clat (usec): min=7429, max=24385, avg=17773.91, stdev=2912.32 00:26:48.421 lat (usec): min=7471, max=24406, avg=17914.26, stdev=2906.08 00:26:48.421 clat percentiles (usec): 00:26:48.421 | 1.00th=[10814], 5.00th=[11600], 10.00th=[13698], 20.00th=[14877], 00:26:48.421 | 30.00th=[16581], 40.00th=[17695], 50.00th=[18744], 60.00th=[19006], 00:26:48.421 | 70.00th=[19530], 80.00th=[20317], 90.00th=[20841], 95.00th=[21365], 00:26:48.421 | 99.00th=[22414], 99.50th=[22938], 99.90th=[23987], 99.95th=[24249], 00:26:48.421 | 99.99th=[24511] 00:26:48.421 bw ( KiB/s): min=13280, max=14733, per=26.14%, avg=14006.50, stdev=1027.43, samples=2 00:26:48.421 iops : min= 3320, max= 3683, avg=3501.50, stdev=256.68, samples=2 00:26:48.421 lat (usec) : 1000=0.01% 00:26:48.421 lat (msec) : 10=0.78%, 20=56.54%, 50=42.67% 00:26:48.421 cpu : usr=2.20%, sys=9.89%, ctx=752, majf=0, minf=11 00:26:48.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:48.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:48.421 issued rwts: total=3114,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:48.421 job3: (groupid=0, jobs=1): err= 0: pid=108148: Mon Dec 16 11:48:48 2024 00:26:48.421 read: IOPS=3014, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1019msec) 00:26:48.421 slat (usec): min=6, max=16417, avg=135.94, stdev=1014.30 00:26:48.421 clat (usec): min=5592, max=39694, avg=18287.39, stdev=5543.95 00:26:48.421 lat (usec): min=5602, max=39735, avg=18423.33, stdev=5600.06 00:26:48.421 clat percentiles (usec): 00:26:48.421 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[12518], 00:26:48.421 | 30.00th=[14746], 40.00th=[16581], 50.00th=[18744], 60.00th=[19006], 00:26:48.421 | 70.00th=[20317], 80.00th=[23200], 90.00th=[26346], 95.00th=[27919], 00:26:48.421 | 99.00th=[30540], 99.50th=[30802], 99.90th=[32375], 99.95th=[34866], 00:26:48.421 | 99.99th=[39584] 00:26:48.421 write: IOPS=3145, BW=12.3MiB/s (12.9MB/s)(12.5MiB/1019msec); 0 zone resets 00:26:48.421 slat (usec): min=5, max=26137, avg=175.25, stdev=1215.08 00:26:48.421 clat (usec): min=4938, max=80026, avg=22740.46, stdev=12046.03 00:26:48.421 lat (usec): min=4955, max=80045, avg=22915.71, stdev=12135.58 
00:26:48.421 clat percentiles (usec): 00:26:48.421 | 1.00th=[ 7963], 5.00th=[10552], 10.00th=[10945], 20.00th=[17433], 00:26:48.421 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19792], 60.00th=[21627], 00:26:48.421 | 70.00th=[21890], 80.00th=[24511], 90.00th=[32375], 95.00th=[51119], 00:26:48.421 | 99.00th=[74974], 99.50th=[76022], 99.90th=[80217], 99.95th=[80217], 00:26:48.421 | 99.99th=[80217] 00:26:48.421 bw ( KiB/s): min=12296, max=12352, per=23.00%, avg=12324.00, stdev=39.60, samples=2 00:26:48.421 iops : min= 3074, max= 3088, avg=3081.00, stdev= 9.90, samples=2 00:26:48.421 lat (msec) : 10=4.25%, 20=54.64%, 50=38.31%, 100=2.79% 00:26:48.421 cpu : usr=2.95%, sys=8.15%, ctx=246, majf=0, minf=17 00:26:48.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:48.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:48.421 issued rwts: total=3072,3205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:48.421 00:26:48.421 Run status group 0 (all jobs): 00:26:48.421 READ: bw=48.4MiB/s (50.7MB/s), 11.8MiB/s-13.1MiB/s (12.3MB/s-13.7MB/s), io=49.3MiB (51.7MB), run=1002-1019msec 00:26:48.421 WRITE: bw=52.3MiB/s (54.9MB/s), 12.3MiB/s-14.0MiB/s (12.9MB/s-14.7MB/s), io=53.3MiB (55.9MB), run=1002-1019msec 00:26:48.421 00:26:48.421 Disk stats (read/write): 00:26:48.421 nvme0n1: ios=2610/2759, merge=0/0, ticks=53024/51328, in_queue=104352, util=88.98% 00:26:48.421 nvme0n2: ios=3040/3072, merge=0/0, ticks=17732/15547, in_queue=33279, util=89.69% 00:26:48.421 nvme0n3: ios=2752/3072, merge=0/0, ticks=13004/12012, in_queue=25016, util=89.29% 00:26:48.421 nvme0n4: ios=2560/2774, merge=0/0, ticks=42796/62270, in_queue=105066, util=89.84% 00:26:48.421 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:26:48.421 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=108167 00:26:48.421 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:26:48.421 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:26:48.421 [global] 00:26:48.421 thread=1 00:26:48.421 invalidate=1 00:26:48.421 rw=read 00:26:48.421 time_based=1 00:26:48.421 runtime=10 00:26:48.421 ioengine=libaio 00:26:48.421 direct=1 00:26:48.421 bs=4096 00:26:48.421 iodepth=1 00:26:48.421 norandommap=1 00:26:48.421 numjobs=1 00:26:48.421 00:26:48.421 [job0] 00:26:48.421 filename=/dev/nvme0n1 00:26:48.421 [job1] 00:26:48.421 filename=/dev/nvme0n2 00:26:48.421 [job2] 00:26:48.421 filename=/dev/nvme0n3 00:26:48.421 [job3] 00:26:48.421 filename=/dev/nvme0n4 00:26:48.421 Could not set queue depth (nvme0n1) 00:26:48.421 Could not set queue depth (nvme0n2) 00:26:48.421 Could not set queue depth (nvme0n3) 00:26:48.421 Could not set queue depth (nvme0n4) 00:26:48.421 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:48.421 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:48.421 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:48.421 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:26:48.421 fio-3.35 00:26:48.421 Starting 4 threads 00:26:51.751 11:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:26:51.751 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=38952960, buflen=4096 00:26:51.751 fio: pid=108210, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:26:51.751 11:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:26:51.751 fio: pid=108209, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:26:51.751 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43003904, buflen=4096 00:26:52.010 11:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:52.010 11:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:26:52.010 fio: pid=108207, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:26:52.010 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=46923776, buflen=4096 00:26:52.268 11:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:52.268 11:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:26:52.527 fio: pid=108208, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:26:52.527 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=50147328, buflen=4096 00:26:52.527 00:26:52.527 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=108207: Mon Dec 16 11:48:53 2024 00:26:52.527 read: IOPS=3250, BW=12.7MiB/s (13.3MB/s)(44.8MiB/3525msec) 00:26:52.527 slat (usec): min=10, max=13229, avg=20.50, stdev=209.33 00:26:52.527 clat (usec): min=156, max=7420, avg=285.63, stdev=90.44 00:26:52.527 lat (usec): min=169, max=13486, avg=306.13, stdev=228.11 00:26:52.527 clat percentiles (usec): 00:26:52.527 | 1.00th=[ 176], 5.00th=[ 231], 10.00th=[ 260], 20.00th=[ 277], 00:26:52.527 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 285], 60.00th=[ 289], 00:26:52.527 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 310], 00:26:52.527 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 832], 99.95th=[ 1778], 00:26:52.527 | 99.99th=[ 3589] 00:26:52.527 bw ( KiB/s): min=12112, max=13176, per=28.20%, avg=12876.00, stdev=394.74, samples=6 00:26:52.527 iops : min= 3028, max= 3294, avg=3219.00, stdev=98.69, samples=6 00:26:52.527 lat (usec) : 250=7.93%, 500=91.86%, 750=0.09%, 1000=0.04% 00:26:52.527 lat (msec) : 2=0.05%, 4=0.02%, 10=0.01% 00:26:52.527 cpu : usr=0.96%, sys=4.40%, ctx=11468, majf=0, minf=1 00:26:52.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.527 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.527 issued rwts: total=11457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.527 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:26:52.527 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=108208: Mon Dec 16 11:48:53 2024 00:26:52.527 read: IOPS=3197, BW=12.5MiB/s (13.1MB/s)(47.8MiB/3829msec) 00:26:52.527 slat (usec): min=9, max=10774, avg=20.07, stdev=192.23 00:26:52.527 clat (usec): min=150, max=206046, avg=291.23, stdev=1860.98 00:26:52.527 lat (usec): min=164, max=206060, avg=311.30, stdev=1871.39 00:26:52.527 clat percentiles (usec): 00:26:52.527 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 182], 20.00th=[ 255], 00:26:52.527 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 285], 60.00th=[ 289], 00:26:52.527 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 314], 00:26:52.527 | 99.00th=[ 408], 99.50th=[ 453], 99.90th=[ 799], 99.95th=[ 1287], 00:26:52.527 | 99.99th=[ 2933] 00:26:52.527 bw ( KiB/s): min= 9869, max=13160, per=27.25%, avg=12441.86, stdev=1151.71, samples=7 00:26:52.527 iops : min= 2467, max= 3290, avg=3110.43, stdev=288.02, samples=7 00:26:52.527 lat (usec) : 250=19.42%, 500=80.18%, 750=0.29%, 1000=0.02% 00:26:52.527 lat (msec) : 2=0.05%, 4=0.03%, 250=0.01% 00:26:52.527 cpu : usr=1.07%, sys=3.89%, ctx=12254, majf=0, minf=2 00:26:52.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.527 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.527 issued rwts: total=12244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:52.527 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=108209: Mon Dec 16 11:48:53 2024 00:26:52.527 read: IOPS=3227, BW=12.6MiB/s (13.2MB/s)(41.0MiB/3253msec) 00:26:52.527 slat (usec): min=13, max=16339, avg=19.40, stdev=221.47 00:26:52.527 clat (usec): min=169, max=3492, avg=288.94, stdev=56.15 00:26:52.527 lat (usec): min=184, max=16600, avg=308.34, stdev=228.42 00:26:52.527 clat percentiles (usec): 00:26:52.527 | 1.00th=[ 190], 5.00th=[ 251], 10.00th=[ 273], 20.00th=[ 281], 00:26:52.527 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 289], 00:26:52.527 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 318], 00:26:52.527 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 742], 99.95th=[ 873], 00:26:52.527 | 99.99th=[ 2442] 00:26:52.527 bw ( KiB/s): min=12872, max=13168, per=28.46%, avg=12996.00, stdev=120.03, samples=6 00:26:52.527 iops : min= 3218, max= 3292, avg=3249.00, stdev=30.01, samples=6 00:26:52.527 lat (usec) : 250=4.87%, 500=94.93%, 750=0.10%, 1000=0.05% 00:26:52.527 lat (msec) : 2=0.02%, 4=0.03% 00:26:52.527 cpu : usr=0.83%, sys=4.27%, ctx=10515, majf=0, minf=1 00:26:52.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.527 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.527 issued rwts: total=10500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:52.527 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=108210: Mon Dec 16 11:48:53 2024 00:26:52.527 read: IOPS=3213, BW=12.5MiB/s (13.2MB/s)(37.1MiB/2960msec) 00:26:52.527 slat (nsec): min=12788, max=82016, avg=16797.71, stdev=5068.61 00:26:52.527 clat (usec): min=182, max=2172, avg=292.98, stdev=41.44 
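Each job in this second pass terminates with err=95; errno 95 on Linux is EOPNOTSUPP ("Operation not supported"), which is what the io_u error lines report once the backing bdevs have been deleted underneath the still-open /dev/nvme0n* nodes. The surrounding fio-wrapper turns that into a nonzero exit status, which the test checks further down (fio_status=4). The control flow reduces to roughly this sketch; it is a condensation of the trace, not the literal wrapper contents:

    # hypothetical condensation of the fio_pid/fio_status handling in target/fio.sh
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3                          # let I/O start before unplugging bdevs
    # ... delete the raid/concat/malloc bdevs via rpc.py while fio runs ...
    fio_status=0
    wait "$fio_pid" || fio_status=$?
    [ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'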
00:26:52.527 lat (usec): min=197, max=2199, avg=309.78, stdev=42.20 00:26:52.527 clat percentiles (usec): 00:26:52.527 | 1.00th=[ 208], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 281], 00:26:52.527 | 30.00th=[ 285], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:26:52.527 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:26:52.527 | 99.00th=[ 404], 99.50th=[ 449], 99.90th=[ 693], 99.95th=[ 1074], 00:26:52.527 | 99.99th=[ 2180] 00:26:52.527 bw ( KiB/s): min=12744, max=13160, per=28.36%, avg=12948.80, stdev=175.78, samples=5 00:26:52.527 iops : min= 3186, max= 3290, avg=3237.20, stdev=43.95, samples=5 00:26:52.527 lat (usec) : 250=2.30%, 500=97.31%, 750=0.29%, 1000=0.03% 00:26:52.527 lat (msec) : 2=0.04%, 4=0.01% 00:26:52.527 cpu : usr=1.01%, sys=4.22%, ctx=9512, majf=0, minf=2 00:26:52.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.527 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.527 issued rwts: total=9511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:52.527 00:26:52.527 Run status group 0 (all jobs): 00:26:52.527 READ: bw=44.6MiB/s (46.8MB/s), 12.5MiB/s-12.7MiB/s (13.1MB/s-13.3MB/s), io=171MiB (179MB), run=2960-3829msec 00:26:52.527 00:26:52.527 Disk stats (read/write): 00:26:52.527 nvme0n1: ios=10892/0, merge=0/0, ticks=3162/0, in_queue=3162, util=95.02% 00:26:52.527 nvme0n2: ios=11162/0, merge=0/0, ticks=3409/0, in_queue=3409, util=95.40% 00:26:52.527 nvme0n3: ios=10033/0, merge=0/0, ticks=2957/0, in_queue=2957, util=95.87% 00:26:52.527 nvme0n4: ios=9213/0, merge=0/0, ticks=2737/0, in_queue=2737, util=96.69% 00:26:52.527 11:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:52.527 11:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:26:52.786 11:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:52.786 11:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:26:53.353 11:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:53.353 11:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:26:53.612 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:53.612 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:26:53.870 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:53.870 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 
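The bdev_malloc_delete calls above are the hot-unplug half of the test: the raid0/concat0 bdevs were torn down first (the earlier bdev_raid_delete calls), and each remaining malloc bdev is then deleted while fio still has the exported namespaces open, which is exactly what produced the EOPNOTSUPP errors. The loop driving these RPCs reduces to approximately the following; how the bdev names split across the three lists is an assumption here:

    # condensed sketch of the deletion loop traced above
    malloc_bdevs='Malloc2 Malloc3'        # assumed list membership
    raid_malloc_bdevs='Malloc4 Malloc5'   # assumed list membership
    concat_malloc_bdevs='Malloc6'         # assumed list membership
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done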
00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 108167 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:54.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:54.129 nvmf hotplug test: fio failed as expected 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:26:54.129 11:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:54.388 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:26:54.388 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:26:54.388 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:26:54.388 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:26:54.388 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:26:54.388 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:54.388 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:26:54.388 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:54.388 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:26:54.388 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:54.388 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:54.388 rmmod nvme_tcp 00:26:54.388 rmmod nvme_fabrics 00:26:54.388 rmmod 
nvme_keyring 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 107675 ']' 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 107675 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 107675 ']' 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 107675 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107675 00:26:54.646 killing process with pid 107675 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107675' 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 107675 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 107675 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:54.646 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:26:54.904 ************************************ 00:26:54.904 END TEST nvmf_fio_target 00:26:54.904 ************************************ 00:26:54.904 00:26:54.904 real 0m21.344s 00:26:54.904 user 1m2.446s 00:26:54.904 sys 0m11.754s 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:54.904 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:55.163 ************************************ 00:26:55.163 START TEST nvmf_bdevio 00:26:55.163 ************************************ 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:26:55.163 * Looking for 
test storage... 00:26:55.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.163 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:55.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.164 --rc genhtml_branch_coverage=1 00:26:55.164 --rc genhtml_function_coverage=1 00:26:55.164 --rc genhtml_legend=1 00:26:55.164 --rc geninfo_all_blocks=1 00:26:55.164 --rc geninfo_unexecuted_blocks=1 00:26:55.164 00:26:55.164 ' 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:55.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.164 --rc genhtml_branch_coverage=1 00:26:55.164 --rc genhtml_function_coverage=1 00:26:55.164 --rc genhtml_legend=1 00:26:55.164 --rc geninfo_all_blocks=1 00:26:55.164 --rc geninfo_unexecuted_blocks=1 00:26:55.164 00:26:55.164 ' 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:55.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.164 --rc genhtml_branch_coverage=1 00:26:55.164 --rc genhtml_function_coverage=1 00:26:55.164 --rc genhtml_legend=1 00:26:55.164 --rc geninfo_all_blocks=1 00:26:55.164 --rc geninfo_unexecuted_blocks=1 00:26:55.164 00:26:55.164 ' 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:55.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.164 --rc genhtml_branch_coverage=1 00:26:55.164 --rc genhtml_function_coverage=1 00:26:55.164 --rc genhtml_legend=1 00:26:55.164 --rc geninfo_all_blocks=1 00:26:55.164 --rc geninfo_unexecuted_blocks=1 00:26:55.164 00:26:55.164 ' 00:26:55.164 11:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:26:55.164 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.427 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:26:55.427 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.427 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.427 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.428 11:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:55.428 Cannot find device "nvmf_init_br" 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:55.428 Cannot find device "nvmf_init_br2" 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:55.428 Cannot find device "nvmf_tgt_br" 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:55.428 Cannot find device "nvmf_tgt_br2" 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:55.428 Cannot find device "nvmf_init_br" 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:55.428 Cannot find device "nvmf_init_br2" 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:26:55.428 11:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:55.428 Cannot find device "nvmf_tgt_br" 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:55.428 Cannot find device "nvmf_tgt_br2" 00:26:55.428 11:48:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:55.428 Cannot find device "nvmf_br" 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:55.428 Cannot find device "nvmf_init_if" 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:55.428 Cannot find device "nvmf_init_if2" 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:55.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:55.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:55.428 11:48:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:55.428 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:55.688 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:55.688 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms 00:26:55.688 00:26:55.688 --- 10.0.0.3 ping statistics --- 00:26:55.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.688 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:55.688 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:55.688 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:26:55.688 00:26:55.688 --- 10.0.0.4 ping statistics --- 00:26:55.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.688 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:55.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:55.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:26:55.688 00:26:55.688 --- 10.0.0.1 ping statistics --- 00:26:55.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.688 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:55.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:26:55.688 00:26:55.688 --- 10.0.0.2 ping statistics --- 00:26:55.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.688 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=108587 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 108587 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 108587 ']' 00:26:55.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.688 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:55.688 [2024-12-16 11:48:56.392910] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:55.688 [2024-12-16 11:48:56.394442] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:26:55.688 [2024-12-16 11:48:56.395287] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.947 [2024-12-16 11:48:56.553382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.947 [2024-12-16 11:48:56.614384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.947 [2024-12-16 11:48:56.614433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.947 [2024-12-16 11:48:56.614444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.947 [2024-12-16 11:48:56.614451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.947 [2024-12-16 11:48:56.614458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:55.947 [2024-12-16 11:48:56.615614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:26:55.947 [2024-12-16 11:48:56.615667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:26:55.947 [2024-12-16 11:48:56.615772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:26:55.947 [2024-12-16 11:48:56.615774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:56.206 [2024-12-16 11:48:56.705501] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:56.206 [2024-12-16 11:48:56.705765] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:56.206 [2024-12-16 11:48:56.706424] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
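The startup above launches the target inside the test namespace with -m 0x78, a core mask selecting cores 3 through 6, which matches the four "Reactor started" notices, and with --interrupt-mode, which is why each spdk_thread is then switched to intr mode. Stripped of the xtrace plumbing, the launch-and-wait pattern is approximately the sketch below; the polling loop is a simplified stand-in for the real waitforlisten helper, and /var/tmp/spdk.sock is the default socket the log says it waits on:

    # sketch of the nvmfappstart/waitforlisten pattern visible above
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # poll the RPC socket until the target answers (simplified waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done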
00:26:56.206 [2024-12-16 11:48:56.706988] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:56.206 [2024-12-16 11:48:56.707011] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:56.206 [2024-12-16 11:48:56.788611] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:56.206 Malloc0 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:56.206 [2024-12-16 11:48:56.856789] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:56.206 { 00:26:56.206 "params": { 00:26:56.206 "name": "Nvme$subsystem", 00:26:56.206 "trtype": "$TEST_TRANSPORT", 00:26:56.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.206 "adrfam": "ipv4", 00:26:56.206 "trsvcid": "$NVMF_PORT", 00:26:56.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.206 "hdgst": ${hdgst:-false}, 00:26:56.206 "ddgst": ${ddgst:-false} 00:26:56.206 }, 00:26:56.206 "method": "bdev_nvme_attach_controller" 00:26:56.206 } 00:26:56.206 EOF 00:26:56.206 )") 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:26:56.206 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:56.206 "params": { 00:26:56.206 "name": "Nvme1", 00:26:56.206 "trtype": "tcp", 00:26:56.206 "traddr": "10.0.0.3", 00:26:56.206 "adrfam": "ipv4", 00:26:56.206 "trsvcid": "4420", 00:26:56.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:56.206 "hdgst": false, 00:26:56.206 "ddgst": false 00:26:56.206 }, 00:26:56.206 "method": "bdev_nvme_attach_controller" 00:26:56.206 }' 00:26:56.206 [2024-12-16 11:48:56.921635] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
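The JSON printed above only pre-attaches one controller for the bdevio app, which consumes it via --json /dev/fd/62. Against an already-running SPDK app the same attach could be issued as an explicit RPC; a sketch, with flags as documented for scripts/rpc.py bdev_nvme_attach_controller (bdev name Nvme1 and host NQN taken from the generated config):

  # Attach the NVMe-oF namespace exported above as bdev Nvme1n1.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1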
00:26:56.206 [2024-12-16 11:48:56.921723] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108622 ] 00:26:56.465 [2024-12-16 11:48:57.076036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:56.465 [2024-12-16 11:48:57.141156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.465 [2024-12-16 11:48:57.141278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:56.465 [2024-12-16 11:48:57.141288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.723 I/O targets: 00:26:56.723 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:26:56.723 00:26:56.723 00:26:56.723 CUnit - A unit testing framework for C - Version 2.1-3 00:26:56.723 http://cunit.sourceforge.net/ 00:26:56.723 00:26:56.723 00:26:56.723 Suite: bdevio tests on: Nvme1n1 00:26:56.723 Test: blockdev write read block ...passed 00:26:56.723 Test: blockdev write zeroes read block ...passed 00:26:56.723 Test: blockdev write zeroes read no split ...passed 00:26:56.723 Test: blockdev write zeroes read split ...passed 00:26:56.723 Test: blockdev write zeroes read split partial ...passed 00:26:56.723 Test: blockdev reset ...[2024-12-16 11:48:57.433387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:56.723 [2024-12-16 11:48:57.433501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb02f70 (9): Bad file descriptor 00:26:56.723 [2024-12-16 11:48:57.436768] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:26:56.723 passed 00:26:56.723 Test: blockdev write read 8 blocks ...passed 00:26:56.723 Test: blockdev write read size > 128k ...passed 00:26:56.723 Test: blockdev write read invalid size ...passed 00:26:56.983 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:56.983 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:56.983 Test: blockdev write read max offset ...passed 00:26:56.983 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:56.983 Test: blockdev writev readv 8 blocks ...passed 00:26:56.983 Test: blockdev writev readv 30 x 1block ...passed 00:26:56.983 Test: blockdev writev readv block ...passed 00:26:56.983 Test: blockdev writev readv size > 128k ...passed 00:26:56.983 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:56.983 Test: blockdev comparev and writev ...[2024-12-16 11:48:57.611199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.983 [2024-12-16 11:48:57.611253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:56.983 [2024-12-16 11:48:57.611274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.983 [2024-12-16 11:48:57.611284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:56.983 [2024-12-16 11:48:57.611636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.983 [2024-12-16 11:48:57.611654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:56.983 [2024-12-16 11:48:57.611672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.983 [2024-12-16 11:48:57.611682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:56.983 [2024-12-16 11:48:57.612020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.983 [2024-12-16 11:48:57.612037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:56.983 [2024-12-16 11:48:57.612066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.983 [2024-12-16 11:48:57.612078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:56.983 [2024-12-16 11:48:57.612447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.983 [2024-12-16 11:48:57.612466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:56.983 [2024-12-16 11:48:57.612489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:56.983 [2024-12-16 11:48:57.612498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:56.983 passed 00:26:56.983 Test: blockdev nvme passthru rw ...passed 00:26:56.983 Test: blockdev nvme passthru vendor specific ...[2024-12-16 11:48:57.696454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:56.983 [2024-12-16 11:48:57.696483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:56.983 [2024-12-16 11:48:57.696775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:56.983 [2024-12-16 11:48:57.696856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:56.983 [2024-12-16 11:48:57.696991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:56.983 [2024-12-16 11:48:57.697124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:56.983 [2024-12-16 11:48:57.697518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:56.983 [2024-12-16 11:48:57.697541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:56.983 passed 00:26:56.983 Test: blockdev nvme admin passthru ...passed 00:26:57.241 Test: blockdev copy ...passed 00:26:57.241 00:26:57.241 Run Summary: Type Total Ran Passed Failed Inactive 00:26:57.241 suites 1 1 n/a 0 0 00:26:57.241 tests 23 23 23 0 0 00:26:57.241 asserts 152 152 152 0 n/a 00:26:57.241 00:26:57.241 Elapsed time = 0.863 seconds 00:26:57.241 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:57.241 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.241 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:57.241 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.241 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:26:57.241 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:26:57.241 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:57.241 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:26:57.241 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:57.241 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:26:57.241 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:57.241 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:57.241 rmmod nvme_tcp 00:26:57.510 rmmod nvme_fabrics 00:26:57.510 rmmod nvme_keyring 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
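The module unloads above belong to nvmftestfini: the test clears its signal trap and calls the teardown explicitly, while the trap registered at startup ('trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT') would have run the same teardown on an early exit. A stripped-down sketch of that pattern, with placeholder cleanup in place of the real helpers:

  # Register teardown once; it fires on normal exit and on signals alike.
  cleanup() {
      kill "$nvmfpid" 2>/dev/null || :    # stop the target if still running
      modprobe -r nvme-tcp nvme-fabrics   # unload the kernel initiator modules
  }
  trap cleanup SIGINT SIGTERM EXIT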
00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 108587 ']' 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 108587 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 108587 ']' 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 108587 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108587 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:26:57.510 killing process with pid 108587 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108587' 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 108587 00:26:57.510 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 108587 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:57.768 11:48:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.768 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.027 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:26:58.027 ************************************ 00:26:58.027 END TEST nvmf_bdevio 00:26:58.027 ************************************ 00:26:58.027 00:26:58.027 real 0m2.856s 00:26:58.027 user 0m7.127s 00:26:58.027 sys 0m1.204s 00:26:58.027 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.027 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:58.027 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:58.027 00:26:58.027 real 3m37.844s 00:26:58.027 user 9m50.151s 00:26:58.027 sys 1m18.462s 00:26:58.027 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.027 ************************************ 00:26:58.027 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:58.027 END TEST nvmf_target_core_interrupt_mode 00:26:58.027 ************************************ 00:26:58.027 11:48:58 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:26:58.027 11:48:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:58.027 11:48:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.027 11:48:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:58.027 ************************************ 00:26:58.027 START TEST nvmf_interrupt 00:26:58.027 ************************************ 00:26:58.027 11:48:58 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:26:58.027 * Looking for test storage... 00:26:58.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:58.027 11:48:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:58.027 11:48:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:26:58.027 11:48:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.286 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:58.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.287 --rc genhtml_branch_coverage=1 00:26:58.287 --rc genhtml_function_coverage=1 00:26:58.287 --rc genhtml_legend=1 00:26:58.287 --rc geninfo_all_blocks=1 00:26:58.287 --rc geninfo_unexecuted_blocks=1 00:26:58.287 00:26:58.287 ' 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:58.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.287 --rc genhtml_branch_coverage=1 00:26:58.287 --rc genhtml_function_coverage=1 00:26:58.287 --rc genhtml_legend=1 00:26:58.287 --rc geninfo_all_blocks=1 00:26:58.287 --rc geninfo_unexecuted_blocks=1 00:26:58.287 00:26:58.287 ' 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:58.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.287 --rc genhtml_branch_coverage=1 00:26:58.287 --rc genhtml_function_coverage=1 00:26:58.287 --rc genhtml_legend=1 00:26:58.287 --rc geninfo_all_blocks=1 00:26:58.287 --rc geninfo_unexecuted_blocks=1 00:26:58.287 00:26:58.287 ' 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:58.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.287 --rc genhtml_branch_coverage=1 00:26:58.287 --rc genhtml_function_coverage=1 00:26:58.287 --rc genhtml_legend=1 00:26:58.287 --rc geninfo_all_blocks=1 00:26:58.287 --rc geninfo_unexecuted_blocks=1 00:26:58.287 00:26:58.287 ' 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
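The cmp_versions trace above splits the dotted lcov version and compares it to 2 field by field before choosing the coverage flags. Outside the harness the same "is version A older than B" test can be approximated with sort -V; a sketch (version_lt is a hypothetical helper, not part of scripts/common.sh):

  # True when $1 sorts strictly before $2 under version ordering.
  version_lt() {
      [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  version_lt 1.15 2 && echo 'lcov older than 2: add branch/function coverage flags'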
00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:26:58.287 11:48:58 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:58.287 Cannot find device "nvmf_init_br" 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:58.287 Cannot find device "nvmf_init_br2" 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:58.287 Cannot find device "nvmf_tgt_br" 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:58.287 Cannot find device "nvmf_tgt_br2" 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:58.287 Cannot find device "nvmf_init_br" 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:58.287 Cannot find device "nvmf_init_br2" 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:58.287 Cannot find device "nvmf_tgt_br" 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:26:58.287 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:58.287 Cannot find device "nvmf_tgt_br2" 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:58.288 Cannot find device "nvmf_br" 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:26:58.288 Cannot find device "nvmf_init_if" 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:58.288 Cannot find device "nvmf_init_if2" 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:58.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:58.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:58.288 11:48:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:58.288 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:58.288 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:58.288 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
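The plumbing above (and the bridge enslaving that follows) is nvmf_veth_init building its topology: two host-side veth interfaces and two target-side interfaces inside nvmf_tgt_ns_spdk, joined through the nvmf_br bridge. A distilled single-pair sketch of the same idea, not the exact script, so 10.0.0.1 on the host can reach 10.0.0.3 in the namespace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_tgt_if
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ping -c 1 10.0.0.3   # same reachability check the harness runs below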
00:26:58.547 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:58.548 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:58.548 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:26:58.548 00:26:58.548 --- 10.0.0.3 ping statistics --- 00:26:58.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.548 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:58.548 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:58.548 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:26:58.548 00:26:58.548 --- 10.0.0.4 ping statistics --- 00:26:58.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.548 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:58.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:58.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:26:58.548 00:26:58.548 --- 10.0.0.1 ping statistics --- 00:26:58.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.548 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:58.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:58.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:26:58.548 00:26:58.548 --- 10.0.0.2 ping statistics --- 00:26:58.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.548 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=108874 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 108874 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 108874 ']' 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.548 11:48:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:58.807 [2024-12-16 11:48:59.313719] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:58.807 [2024-12-16 11:48:59.315058] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:26:58.807 [2024-12-16 11:48:59.315178] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.807 [2024-12-16 11:48:59.468699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:58.807 [2024-12-16 11:48:59.532089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
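The app_setup_trace notices printed at startup here (Tracepoint Group Mask 0xFFFF above, the spdk_trace hint just below) spell out how to inspect this run's tracepoints. Following those messages verbatim, a snapshot can be taken while the target runs, or the shm file copied out for offline decoding:

  # Both commands come straight from the notices the app itself prints.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
  cp /dev/shm/nvmf_trace.0 /tmp/   # keep for offline analysis/debug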
00:26:58.807 [2024-12-16 11:48:59.532173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.807 [2024-12-16 11:48:59.532193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.807 [2024-12-16 11:48:59.532203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.807 [2024-12-16 11:48:59.532213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.807 [2024-12-16 11:48:59.533523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.807 [2024-12-16 11:48:59.533538] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.065 [2024-12-16 11:48:59.633115] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:59.065 [2024-12-16 11:48:59.633787] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:59.065 [2024-12-16 11:48:59.633804] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:59.632 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.632 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:26:59.632 11:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:59.632 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:59.632 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:59.632 11:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.632 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:26:59.632 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:26:59.632 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:59.632 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:26:59.891 5000+0 records in 00:26:59.891 5000+0 records out 00:26:59.891 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0333642 s, 307 MB/s 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:59.891 AIO0 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:59.891 [2024-12-16 11:49:00.434537] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:59.891 [2024-12-16 11:49:00.479040] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 108874 0 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108874 0 idle 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108874 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108874 -w 256 00:26:59.891 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108874 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.30 reactor_0' 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108874 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.30 reactor_0 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 108874 1 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108874 1 idle 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108874 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108874 -w 256 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108878 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.00 reactor_1' 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108878 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.00 reactor_1 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=108943 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:27:00.151 
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 108874 0
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 108874 0 busy
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108874
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108874 -w 256
00:27:00.151 11:49:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:27:00.410 11:49:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108874 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.30 reactor_0'
00:27:00.410 11:49:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108874 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.30 reactor_0
00:27:00.410 11:49:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:27:00.410 11:49:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:27:00.410 11:49:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:27:00.410 11:49:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:27:00.410 11:49:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:27:00.410 11:49:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:27:00.410 11:49:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1
00:27:01.343 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- ))
00:27:01.343 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:27:01.343 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108874 -w 256
00:27:01.343 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108874 root 20 0 64.2g 46336 33152 D 99.9 0.4 0:01.69 reactor_0'
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108874 root 20 0 64.2g 46336 33152 D 99.9 0.4 0:01.69 reactor_0
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 108874 1
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 108874 1 busy
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108874
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108874 -w 256
00:27:01.600 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:27:01.857 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108878 root 20 0 64.2g 46336 33152 R 66.7 0.4 0:00.81 reactor_1'
00:27:01.857 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108878 root 20 0 64.2g 46336 33152 R 66.7 0.4 0:00.81 reactor_1
00:27:01.857 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:27:01.857 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:27:01.857 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7
00:27:01.857 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66
00:27:01.857 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:27:01.857 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:27:01.857 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:27:01.857 11:49:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:27:01.857 11:49:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 108943
00:27:11.821 Initializing NVMe Controllers
00:27:11.821 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:27:11.821 Controller IO queue size 256, less than required.
00:27:11.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:11.821 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:27:11.821 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:27:11.821 Initialization complete. Launching workers.
00:27:11.821 ========================================================
00:27:11.821                                                                                                          Latency(us)
00:27:11.821 Device Information                     : IOPS       MiB/s    Average       min        max
00:27:11.821 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 7234.80 28.26 35436.77 4735.31 75456.82
00:27:11.822 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 7107.30 27.76 36082.35 6061.76 77489.89
00:27:11.822 ========================================================
00:27:11.822 Total                                  : 14342.10 56.02 35756.69 4735.31 77489.89
00:27:11.822
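A sanity check on the summary: the Total row is the column-wise aggregate of the two per-core rows (7234.80 + 7107.30 = 14342.10 IOPS and 28.26 + 27.76 = 56.02 MiB/s), the average latency (35756.69 us) is the IOPS-weighted mean of the per-core averages, and the min/max values are the extremes across both cores. The "Controller IO queue size 256, less than required" notice above simply means the target granted a smaller queue than the requested depth of 256, so excess requests queue inside the NVMe driver rather than on the wire.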
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 108874 0
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108874 0 idle
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108874
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108874 -w 256
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108874 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:13.70 reactor_0'
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108874 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:13.70 reactor_0
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 108874 1
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108874 1 idle
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108874
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108874 -w 256
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108878 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:06.69 reactor_1'
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108878 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:06.69 reactor_1
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:27:11.822 11:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0
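waitforserial, traced above, is a retry loop: sleep, list block devices with their serials, and count matches until the expected device count appears. A simplified sketch (names illustrative; the real helper lives in test/common/autotest_common.sh and gives up after 15 tries):

waitforserial() {
    local serial=$1 expected=${2:-1} i=0 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        # count lsblk rows whose SERIAL column matches the target serial
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == expected )) && return 0
    done
    return 1
}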
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 108874 0
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108874 0 idle
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108874
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108874 -w 256
00:27:13.197 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108874 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:13.75 reactor_0'
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108874 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:13.75 reactor_0
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 108874 1
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 108874 1 idle
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=108874
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 108874 -w 256
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 108878 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:06.69 reactor_1'
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 108878 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:06.69 reactor_1
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:27:13.198 11:49:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:27:13.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:27:13.456 11:49:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:27:13.456 11:49:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0
00:27:13.456 11:49:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:27:13.456 11:49:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:13.456 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:13.456 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:27:13.456 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0
00:27:13.456 11:49:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:27:13.456 11:49:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:27:13.456 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:13.456 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:13.714 rmmod nvme_tcp
00:27:13.714 rmmod nvme_fabrics
00:27:13.714 rmmod nvme_keyring
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 108874 ']'
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 108874
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 108874 ']'
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 108874
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108874
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:13.714 killing process with pid 108874
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108874'
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 108874
00:27:13.714 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 108874
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:27:13.972 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns
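The teardown traced above follows a fixed order: killprocess on the target pid, unload of the nvme-tcp/nvme-fabrics kernel modules, iptables restored minus the SPDK_NVMF rules, and finally the veth/bridge/netns topology dismantled. A condensed sketch of the killprocess part (simplified from the traced logic; the real helper is in test/common/autotest_common.sh):

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it is already gone
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1   # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reap; works here because the target is a child of the test shell
}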
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0
00:27:14.231
00:27:14.231 real 0m16.255s
00:27:14.231 user 0m28.190s
00:27:14.231 sys 0m7.482s
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:14.231 11:49:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:27:14.231 ************************************
00:27:14.231 END TEST nvmf_interrupt
00:27:14.231 ************************************
00:27:14.231
00:27:14.231 real 20m36.921s
00:27:14.231 user 54m16.927s
00:27:14.231 sys 4m58.927s
00:27:14.231 11:49:14 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:14.231 11:49:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:14.231 ************************************
00:27:14.231 END TEST nvmf_tcp
00:27:14.231 ************************************
00:27:14.492 11:49:14 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]]
00:27:14.492 11:49:14 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:27:14.492 11:49:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:14.492 11:49:14 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:14.492 11:49:14 -- common/autotest_common.sh@10 -- # set +x
00:27:14.492 ************************************
00:27:14.492 START TEST spdkcli_nvmf_tcp
00:27:14.492 ************************************
00:27:14.492 11:49:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:27:14.492 * Looking for test storage...
00:27:14.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0
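The lt 1.15 2 / cmp_versions trace above is scripts/common.sh checking the installed lcov version before picking coverage flags. The comparison splits both version strings on '.', '-' and ':' and walks the components numerically. A reduced sketch of just the less-than case (illustrative name; the real helper also handles >, =, and mixed-length versions via the decimal guard seen in the trace):

version_lt() {
    local -a ver1 ver2
    local IFS=.-: v a b
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not less-than
}
# version_lt 1.15 2 succeeds: 1 < 2 decides it on the first component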
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:14.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:14.492 --rc genhtml_branch_coverage=1
00:27:14.492 --rc genhtml_function_coverage=1
00:27:14.492 --rc genhtml_legend=1
00:27:14.492 --rc geninfo_all_blocks=1
00:27:14.492 --rc geninfo_unexecuted_blocks=1
00:27:14.492
00:27:14.492 '
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:14.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:14.492 --rc genhtml_branch_coverage=1
00:27:14.492 --rc genhtml_function_coverage=1
00:27:14.492 --rc genhtml_legend=1
00:27:14.492 --rc geninfo_all_blocks=1
00:27:14.492 --rc geninfo_unexecuted_blocks=1
00:27:14.492
00:27:14.492 '
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:27:14.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:14.492 --rc genhtml_branch_coverage=1
00:27:14.492 --rc genhtml_function_coverage=1
00:27:14.492 --rc genhtml_legend=1
00:27:14.492 --rc geninfo_all_blocks=1
00:27:14.492 --rc geninfo_unexecuted_blocks=1
00:27:14.492
00:27:14.492 '
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:27:14.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:14.492 --rc genhtml_branch_coverage=1
00:27:14.492 --rc genhtml_function_coverage=1
00:27:14.492 --rc genhtml_legend=1
00:27:14.492 --rc geninfo_all_blocks=1
00:27:14.492 --rc geninfo_unexecuted_blocks=1
00:27:14.492
00:27:14.492 '
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:14.492 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0
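The "[: : integer expression expected" line above is a real, tolerated artifact: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' because the environment variable it tests is unset, and test(1) cannot compare an empty string numerically, so the check simply falls through as false. A default-expansion guard avoids the noise (FLAG is an illustrative name; the trace does not show which variable line 33 reads):

[ "${FLAG:-0}" -eq 1 ] && echo "flag enabled"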
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=109283
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 109283
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 109283 ']'
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:14.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:14.492 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:14.751 [2024-12-16 11:49:15.281823] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:27:14.751 [2024-12-16 11:49:15.281950] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109283 ]
00:27:14.751 [2024-12-16 11:49:15.433499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:27:14.751 [2024-12-16 11:49:15.502658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:27:14.751 [2024-12-16 11:49:15.502673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:27:15.009 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:15.009 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0
00:27:15.009 11:49:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:27:15.009 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:15.009 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:15.009 11:49:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:27:15.009 11:49:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:27:15.009 11:49:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:27:15.009 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:15.009 11:49:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:15.009 11:49:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:27:15.009 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:27:15.009 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:27:15.009 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:27:15.009 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:27:15.009 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:27:15.009 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:27:15.009 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:27:15.009 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:27:15.009 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:27:15.009 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:27:15.009 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' '
00:27:18.307 [2024-12-16 11:49:18.505049] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:19.242 [2024-12-16 11:49:19.826420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:27:21.777 [2024-12-16 11:49:22.292267] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:27:23.681 [2024-12-16 11:49:24.409862] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:27:25.582 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:27:25.582 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:27:25.583 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:27:25.583 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:27:25.583 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:27:25.583 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:27:25.583 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:27:25.583 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:27:25.583 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:27:25.583 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:27:25.583 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:27:25.583 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:27:25.583 11:49:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:27:25.583 11:49:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:25.583 11:49:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
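spdkcli_job.py, as invoked above, takes one newline-separated argument in which each line is a triple: the spdkcli command to run, a substring expected in the resulting configuration tree, and a True/False flag; the "Executing command: [...]" lines are it echoing each parsed triple back. A minimal sketch of the same style of invocation (our reading of the traced call; the exact semantics live in test/spdkcli/spdkcli_job.py):

/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
'/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True' 'nqn.2014-08.org.spdk:cnode1' True"

The check_match step that follows dumps the live tree with scripts/spdkcli.py ll /nvmf and compares it against match_files/spdkcli_nvmf.test.match using test/app/match/match, then removes the generated .test file.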
00:27:25.583 11:49:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:27:25.583 11:49:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:25.583 11:49:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:25.583 11:49:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:27:25.583 11:49:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf
00:27:26.150 11:49:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:27:26.150 11:49:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:27:26.150 11:49:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:27:26.150 11:49:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:26.150 11:49:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:26.150 11:49:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:27:26.150 11:49:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:26.150 11:49:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:26.150 11:49:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:27:26.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:27:26.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:27:26.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:27:26.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:27:26.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:27:26.150 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:27:26.150 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:27:26.150 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:27:26.150 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:27:26.150 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:27:26.150 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:27:26.150 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:27:26.150 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:27:26.150 '
00:27:32.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:27:32.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:27:32.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:27:32.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:27:32.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:27:32.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:27:32.765 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:27:32.765 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:27:32.765 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:27:32.765 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:27:32.765 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:27:32.765 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:27:32.765 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:27:32.765 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 109283
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 109283 ']'
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 109283
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109283
00:27:32.766 killing process with pid 109283
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109283'
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 109283
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 109283
00:27:32.766 Process with pid 109283 is not found
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 109283 ']'
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 109283
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 109283 ']'
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 109283
00:27:32.766 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (109283) - No such process
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 109283 is not found'
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:27:32.766
00:27:32.766 real 0m17.761s
00:27:32.766 user 0m38.733s
00:27:32.766 sys 0m0.967s
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:32.766 11:49:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:32.766 ************************************
00:27:32.766 END TEST spdkcli_nvmf_tcp
00:27:32.766 ************************************
00:27:32.766 11:49:32 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:27:32.766 11:49:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:32.766 11:49:32 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:32.766 11:49:32 -- common/autotest_common.sh@10 -- # set +x
00:27:32.766 ************************************
00:27:32.766 START TEST nvmf_identify_passthru
00:27:32.766 ************************************
00:27:32.766 11:49:32 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:27:32.766 * Looking for test storage...
00:27:32.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:27:32.766 11:49:32 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:32.766 11:49:32 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version
00:27:32.766 11:49:32 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:32.766 11:49:32 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-:
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-:
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<'
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
ver1_l : ver2_l) )) 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:32.766 11:49:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:27:32.766 11:49:32 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:32.766 11:49:32 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:32.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.766 --rc genhtml_branch_coverage=1 00:27:32.766 --rc genhtml_function_coverage=1 00:27:32.766 --rc genhtml_legend=1 00:27:32.766 --rc geninfo_all_blocks=1 00:27:32.766 --rc geninfo_unexecuted_blocks=1 00:27:32.766 00:27:32.766 ' 00:27:32.766 11:49:32 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:32.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.766 --rc genhtml_branch_coverage=1 00:27:32.766 --rc genhtml_function_coverage=1 00:27:32.766 --rc genhtml_legend=1 00:27:32.766 --rc geninfo_all_blocks=1 00:27:32.766 --rc geninfo_unexecuted_blocks=1 00:27:32.766 00:27:32.766 ' 00:27:32.766 11:49:32 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:32.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.766 --rc genhtml_branch_coverage=1 00:27:32.766 --rc genhtml_function_coverage=1 00:27:32.766 --rc genhtml_legend=1 00:27:32.766 --rc geninfo_all_blocks=1 00:27:32.766 --rc geninfo_unexecuted_blocks=1 00:27:32.766 00:27:32.766 ' 00:27:32.766 11:49:32 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:32.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.766 --rc genhtml_branch_coverage=1 00:27:32.766 --rc genhtml_function_coverage=1 00:27:32.766 --rc genhtml_legend=1 00:27:32.766 --rc geninfo_all_blocks=1 00:27:32.766 --rc geninfo_unexecuted_blocks=1 00:27:32.766 00:27:32.766 ' 00:27:32.766 11:49:32 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:32.766 11:49:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:32.766 11:49:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.766 11:49:32 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.766 11:49:32 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.766 11:49:32 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.766 
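
The lt 1.15 2 trace above is scripts/common.sh comparing the detected lcov version against 2 component by component: both strings are split on dots, dashes, and colons, then each numeric field is compared in turn. A minimal standalone sketch of that idea (the function name version_lt is hypothetical, not the script's own):

    # Component-wise "less than" in the spirit of the cmp_versions trace
    # above; version_lt is a hypothetical name for this sketch.
    version_lt() {
        local -a ver1 ver2
        local IFS=.-:                    # split on dots, dashes, colons, as the trace does
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < max; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            ((a > b)) && return 1
            ((a < b)) && return 0
        done
        return 1                         # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x: keep the legacy --rc options"
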
11:49:32 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.766 11:49:32 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.766 11:49:32 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.766 11:49:32 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.766 11:49:32 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.766 11:49:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.766 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:27:32.766 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:27:32.766 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.766 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.766 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:32.766 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.766 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:32.766 11:49:33 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:27:32.766 11:49:33 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.766 11:49:33 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.766 11:49:33 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.766 11:49:33 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.766 11:49:33 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.767 11:49:33 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.767 11:49:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:32.767 11:49:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:32.767 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:32.767 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:32.767 11:49:33 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:27:32.767 11:49:33 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.767 11:49:33 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.767 11:49:33 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.767 11:49:33 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.767 11:49:33 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.767 11:49:33 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.767 11:49:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:32.767 11:49:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.767 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.767 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:32.767 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:32.767 Cannot find device "nvmf_init_br" 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:32.767 Cannot find device "nvmf_init_br2" 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:32.767 Cannot find device "nvmf_tgt_br" 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:32.767 Cannot find device "nvmf_tgt_br2" 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:32.767 Cannot find device "nvmf_init_br" 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:32.767 Cannot find device "nvmf_init_br2" 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:32.767 Cannot find device "nvmf_tgt_br" 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:32.767 Cannot find device "nvmf_tgt_br2" 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:32.767 Cannot find device "nvmf_br" 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:32.767 Cannot find device "nvmf_init_if" 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:32.767 Cannot find device "nvmf_init_if2" 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:32.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:32.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:32.767 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:32.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:32.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:27:32.768 00:27:32.768 --- 10.0.0.3 ping statistics --- 00:27:32.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.768 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:32.768 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:32.768 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:27:32.768 00:27:32.768 --- 10.0.0.4 ping statistics --- 00:27:32.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.768 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:32.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:32.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:27:32.768 00:27:32.768 --- 10.0.0.1 ping statistics --- 00:27:32.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.768 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:32.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
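
The nvmf_veth_init sequence above assembles the TCP test topology: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, ACCEPT rules tagged with an SPDK_NVMF comment, and a ping sweep to verify reachability. A minimal sketch of the same topology with a single initiator/target pair (addresses as in the log; must run as root), not the full two-pairs-per-side setup the trace performs:

    #!/usr/bin/env bash
    # One initiator interface on the host, one target interface in a netns,
    # joined by a bridge -- a reduced version of the nvmf_veth_init trace.
    set -e
    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns "$NS"        # target end lives in the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if   # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br    # bridge the two host-side peers
    ip link set nvmf_tgt_br master nvmf_br

    # Tag the rule so teardown can strip it by comment, as the log does
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.3                         # host -> namespaced target

The comment tag is what lets teardown later restore the firewall with a single iptables-save | grep -v SPDK_NVMF | iptables-restore round trip.
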
00:27:32.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:27:32.768 00:27:32.768 --- 10.0.0.2 ping statistics --- 00:27:32.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.768 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:32.768 11:49:33 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:32.768 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:32.768 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:32.768 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:27:32.768 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:27:32.768 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:27:32.768 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:32.768 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:32.768 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:33.027 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
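
Above, the test resolves the first NVMe controller's PCI address by piping scripts/gen_nvme.sh through jq, then scrapes the serial (and, next, the model) out of spdk_nvme_identify with grep/awk. Assuming the SPDK checkout and built binaries shown in the log, plus jq, the same extraction in isolation looks roughly like:

    # Hedged sketch of the bdf/serial/model extraction traced above.
    rootdir=/home/vagrant/spdk_repo/spdk

    # First controller's PCI address, as get_first_nvme_bdf derives it
    bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)

    identify="$rootdir/build/bin/spdk_nvme_identify"
    serial=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
    model=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
    echo "bdf=$bdf serial=$serial model=$model"   # e.g. 0000:00:10.0 12340 QEMU
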
00:27:33.027 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:33.027 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:33.027 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:33.286 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:27:33.286 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:33.286 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:33.286 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.286 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:33.286 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:33.286 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.286 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=109795 00:27:33.286 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:33.286 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:33.286 11:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 109795 00:27:33.286 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 109795 ']' 00:27:33.286 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.286 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.286 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.286 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.286 11:49:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.286 [2024-12-16 11:49:33.922002] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:27:33.286 [2024-12-16 11:49:33.922090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.545 [2024-12-16 11:49:34.059912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:33.545 [2024-12-16 11:49:34.123734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.545 [2024-12-16 11:49:34.123805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.545 [2024-12-16 11:49:34.123821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.545 [2024-12-16 11:49:34.123833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
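
Because the target is launched with --wait-for-rpc, the SPDK framework pauses after EAL initialization until RPCs arrive, which is what lets the test enable the passthru identify handler before framework_start_init (as the next trace lines show). A condensed sketch of that handshake, with a plain socket poll standing in for the suite's waitforlisten helper:

    # Sketch of the --wait-for-rpc startup traced here; the polling loop
    # is a simplified stand-in for waitforlisten.
    rootdir=/home/vagrant/spdk_repo/spdk
    rpc="$rootdir/scripts/rpc.py"

    ip netns exec nvmf_tgt_ns_spdk \
        "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!

    for _ in $(seq 1 100); do                  # wait for the RPC socket
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done

    "$rpc" nvmf_set_config --passthru-identify-ctrlr   # must precede init
    "$rpc" framework_start_init
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
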
00:27:33.545 [2024-12-16 11:49:34.123843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.545 [2024-12-16 11:49:34.125293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.545 [2024-12-16 11:49:34.125396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.545 [2024-12-16 11:49:34.125494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.545 [2024-12-16 11:49:34.125504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.481 11:49:34 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.481 11:49:34 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:27:34.481 11:49:34 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:34.481 11:49:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.481 11:49:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.481 11:49:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.481 11:49:34 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:34.481 11:49:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.481 11:49:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.481 [2024-12-16 11:49:35.036820] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.481 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.481 [2024-12-16 11:49:35.046878] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.481 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.481 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.481 Nvme0n1 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.481 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.481 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.481 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.481 [2024-12-16 11:49:35.203392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.481 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.481 [ 00:27:34.481 { 00:27:34.481 "allow_any_host": true, 00:27:34.481 "hosts": [], 00:27:34.481 "listen_addresses": [], 00:27:34.481 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:34.481 "subtype": "Discovery" 00:27:34.481 }, 00:27:34.481 { 00:27:34.481 "allow_any_host": true, 00:27:34.481 "hosts": [], 00:27:34.481 "listen_addresses": [ 00:27:34.481 { 00:27:34.481 "adrfam": "IPv4", 00:27:34.481 "traddr": "10.0.0.3", 00:27:34.481 "trsvcid": "4420", 00:27:34.481 "trtype": "TCP" 00:27:34.481 } 00:27:34.481 ], 00:27:34.481 "max_cntlid": 65519, 00:27:34.481 "max_namespaces": 1, 00:27:34.481 "min_cntlid": 1, 00:27:34.481 "model_number": "SPDK bdev Controller", 00:27:34.481 "namespaces": [ 00:27:34.481 { 00:27:34.481 "bdev_name": "Nvme0n1", 00:27:34.481 "name": "Nvme0n1", 00:27:34.481 "nguid": "9381BF29D66A474E8B35CA33838F435E", 00:27:34.481 "nsid": 1, 00:27:34.481 "uuid": "9381bf29-d66a-474e-8b35-ca33838f435e" 00:27:34.481 } 00:27:34.481 ], 00:27:34.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.481 "serial_number": "SPDK00000000000001", 00:27:34.481 "subtype": "NVMe" 00:27:34.481 } 00:27:34.481 ] 00:27:34.481 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.481 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:34.481 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:34.481 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:34.739 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:27:34.739 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:34.739 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:34.739 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:34.997 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:27:34.997 11:49:35 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:27:34.997 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:27:34.997 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:34.997 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.997 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:34.997 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.997 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:34.997 11:49:35 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:34.997 11:49:35 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:34.997 11:49:35 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:27:35.256 11:49:35 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.256 11:49:35 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:27:35.256 11:49:35 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.256 11:49:35 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.256 rmmod nvme_tcp 00:27:35.256 rmmod nvme_fabrics 00:27:35.256 rmmod nvme_keyring 00:27:35.256 11:49:35 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.256 11:49:35 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:27:35.256 11:49:35 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:27:35.256 11:49:35 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 109795 ']' 00:27:35.256 11:49:35 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 109795 00:27:35.256 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 109795 ']' 00:27:35.256 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 109795 00:27:35.256 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:27:35.256 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:35.256 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109795 00:27:35.256 killing process with pid 109795 00:27:35.256 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:35.256 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:35.256 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109795' 00:27:35.256 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 109795 00:27:35.256 11:49:35 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 109795 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:35.515 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.515 11:49:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:35.515 11:49:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.775 11:49:36 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:27:35.775 00:27:35.775 real 0m3.471s 00:27:35.775 user 0m8.325s 00:27:35.775 sys 0m0.890s 00:27:35.775 11:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.775 ************************************ 00:27:35.775 END TEST nvmf_identify_passthru 00:27:35.775 ************************************ 00:27:35.775 11:49:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:35.775 11:49:36 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:35.775 11:49:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:35.775 11:49:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.775 11:49:36 -- common/autotest_common.sh@10 -- # set +x 00:27:35.775 ************************************ 00:27:35.775 START TEST nvmf_dif 00:27:35.775 ************************************ 00:27:35.775 11:49:36 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:35.775 * Looking for test storage... 
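
The nvmf_tcp_fini teardown traced a few lines back is the mirror image of setup: every firewall rule tagged SPDK_NVMF is dropped by round-tripping iptables-save through grep -v, then the bridge, veth links, and namespace are deleted. Condensed to match the single-pair setup sketch earlier:

    # Teardown matching the earlier single-pair setup sketch; errors are
    # ignored so the cleanup is re-runnable, unlike the strict trace above.
    NS=nvmf_tgt_ns_spdk

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip tagged rules

    ip link set nvmf_init_br nomaster 2>/dev/null || true
    ip link set nvmf_tgt_br nomaster 2>/dev/null || true
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true        # the peer goes with it
    ip netns exec "$NS" ip link delete nvmf_tgt_if 2>/dev/null || true
    ip netns delete "$NS" 2>/dev/null || true              # stand-in for remove_spdk_ns
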
00:27:35.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:35.775 11:49:36 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:35.775 11:49:36 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:27:35.775 11:49:36 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:35.775 11:49:36 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:27:35.775 11:49:36 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.775 11:49:36 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.775 --rc genhtml_branch_coverage=1 00:27:35.775 --rc genhtml_function_coverage=1 00:27:35.775 --rc genhtml_legend=1 00:27:35.775 --rc geninfo_all_blocks=1 00:27:35.775 --rc geninfo_unexecuted_blocks=1 00:27:35.775 00:27:35.775 ' 00:27:35.775 11:49:36 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.775 --rc genhtml_branch_coverage=1 00:27:35.775 --rc genhtml_function_coverage=1 00:27:35.775 --rc genhtml_legend=1 00:27:35.775 --rc geninfo_all_blocks=1 00:27:35.775 --rc geninfo_unexecuted_blocks=1 00:27:35.775 00:27:35.775 ' 00:27:35.775 11:49:36 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.775 --rc genhtml_branch_coverage=1 00:27:35.775 --rc genhtml_function_coverage=1 00:27:35.775 --rc genhtml_legend=1 00:27:35.775 --rc geninfo_all_blocks=1 00:27:35.775 --rc geninfo_unexecuted_blocks=1 00:27:35.775 00:27:35.775 ' 00:27:35.775 11:49:36 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.775 --rc genhtml_branch_coverage=1 00:27:35.775 --rc genhtml_function_coverage=1 00:27:35.775 --rc genhtml_legend=1 00:27:35.775 --rc geninfo_all_blocks=1 00:27:35.775 --rc geninfo_unexecuted_blocks=1 00:27:35.775 00:27:35.775 ' 00:27:35.775 11:49:36 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.775 11:49:36 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.775 11:49:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.775 11:49:36 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.775 11:49:36 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.775 11:49:36 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:35.775 11:49:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:35.775 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:35.775 11:49:36 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:35.775 11:49:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:35.776 11:49:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:35.776 11:49:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:35.776 11:49:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:35.776 11:49:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.776 11:49:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:35.776 11:49:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:35.776 11:49:36 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:35.776 11:49:36 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:36.035 Cannot find device "nvmf_init_br" 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:36.035 Cannot find device "nvmf_init_br2" 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:36.035 Cannot find device "nvmf_tgt_br" 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@164 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:36.035 Cannot find device "nvmf_tgt_br2" 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@165 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:36.035 Cannot find device "nvmf_init_br" 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@166 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:36.035 Cannot find device "nvmf_init_br2" 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@167 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:36.035 Cannot find device "nvmf_tgt_br" 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@168 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:36.035 Cannot find device "nvmf_tgt_br2" 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@169 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:36.035 Cannot find device "nvmf_br" 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@170 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:27:36.035 Cannot find device "nvmf_init_if" 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@171 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:36.035 Cannot find device "nvmf_init_if2" 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@172 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:36.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@173 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:36.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@174 -- # true 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:36.035 11:49:36 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:36.294 11:49:36 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:27:36.294 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:27:36.294 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms
00:27:36.294 
00:27:36.294 --- 10.0.0.3 ping statistics ---
00:27:36.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:36.294 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:27:36.294 11:49:36 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:27:36.294 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:27:36.294 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms
00:27:36.294 
00:27:36.295 --- 10.0.0.4 ping statistics ---
00:27:36.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:36.295 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:27:36.295 11:49:36 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:27:36.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:36.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:27:36.295 
00:27:36.295 --- 10.0.0.1 ping statistics ---
00:27:36.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:36.295 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:27:36.295 11:49:36 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:27:36.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:36.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms
00:27:36.295 
00:27:36.295 --- 10.0.0.2 ping statistics ---
00:27:36.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:36.295 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:27:36.295 11:49:36 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:36.295 11:49:36 nvmf_dif -- nvmf/common.sh@461 -- # return 0
00:27:36.295 11:49:36 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:27:36.295 11:49:36 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:27:36.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:36.553 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:27:36.553 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:27:36.553 11:49:37 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:36.553 11:49:37 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:36.553 11:49:37 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:36.553 11:49:37 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:36.553 11:49:37 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:36.553 11:49:37 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:36.812 11:49:37 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:27:36.812 11:49:37 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:27:36.812 11:49:37 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:36.812 11:49:37 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:36.812 11:49:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:27:36.812 11:49:37 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=110190
00:27:36.812 11:49:37 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 110190
00:27:36.812 11:49:37 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 110190 ']'
00:27:36.812 11:49:37 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:27:36.812 11:49:37 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:36.812 11:49:37 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:36.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:36.812 11:49:37 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:36.812 11:49:37 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:36.812 11:49:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:27:37.071 [2024-12-16 11:49:37.392096] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
00:27:37.071 [2024-12-16 11:49:37.392864] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:37.071 [2024-12-16 11:49:37.542756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:37.071 [2024-12-16 11:49:37.586348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
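
The nvmf_veth_init trace above reduces to a small veth-plus-bridge topology: one veth pair per side, the target ends moved into the nvmf_tgt_ns_spdk namespace, and a bridge in the root namespace stitching the peers together. A minimal standalone sketch follows (interface names and 10.0.0.x addresses are taken from the log; the second interface pair, error handling, and the ipts iptables wrapper from nvmf/common.sh are omitted):

# Condensed sketch of the test network built above, assuming root privileges.
ip netns add nvmf_tgt_ns_spdk                               # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge                             # bridge joins the two veth halves
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP listener port
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # reachability check, as in the log

This also explains the earlier "Cannot find device" and "integer expression expected" messages: teardown of a topology that does not exist yet runs before setup, and each failing command in that phase is forced to true, so they are expected first-run noise rather than test failures.
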
00:27:37.071 [2024-12-16 11:49:37.586423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.071 [2024-12-16 11:49:37.586449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.071 [2024-12-16 11:49:37.586457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.071 [2024-12-16 11:49:37.586464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.071 [2024-12-16 11:49:37.586875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.071 11:49:37 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.071 11:49:37 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:27:37.071 11:49:37 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:37.071 11:49:37 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:37.071 11:49:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:37.071 11:49:37 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.071 11:49:37 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:37.071 11:49:37 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:37.071 11:49:37 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.071 11:49:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:37.071 [2024-12-16 11:49:37.769323] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.071 11:49:37 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.071 11:49:37 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:37.071 11:49:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:37.071 11:49:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.071 11:49:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:37.071 ************************************ 00:27:37.071 START TEST fio_dif_1_default 00:27:37.071 ************************************ 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:37.071 bdev_null0 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.071 11:49:37 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:37.071 [2024-12-16 11:49:37.817455] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:27:37.071 11:49:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:27:37.330 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:37.330 11:49:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.330 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:37.330 11:49:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.330 { 00:27:37.330 "params": { 00:27:37.330 "name": "Nvme$subsystem", 00:27:37.330 "trtype": "$TEST_TRANSPORT", 00:27:37.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.330 "adrfam": "ipv4", 00:27:37.330 "trsvcid": "$NVMF_PORT", 00:27:37.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.330 "hdgst": ${hdgst:-false}, 00:27:37.330 "ddgst": ${ddgst:-false} 00:27:37.330 }, 00:27:37.330 "method": "bdev_nvme_attach_controller" 00:27:37.330 } 00:27:37.330 EOF 00:27:37.330 )") 00:27:37.330 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:37.330 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:37.330 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:37.330 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:37.330 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:37.330 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:37.331 11:49:37 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib=
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 ))
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files ))
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq .
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=,
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:27:37.331 "params": {
00:27:37.331 "name": "Nvme0",
00:27:37.331 "trtype": "tcp",
00:27:37.331 "traddr": "10.0.0.3",
00:27:37.331 "adrfam": "ipv4",
00:27:37.331 "trsvcid": "4420",
00:27:37.331 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:37.331 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:27:37.331 "hdgst": false,
00:27:37.331 "ddgst": false
00:27:37.331 },
00:27:37.331 "method": "bdev_nvme_attach_controller"
00:27:37.331 }'
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:27:37.331 11:49:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:27:37.331 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:27:37.331 fio-3.35
00:27:37.331 Starting 1 thread
00:27:49.534 
00:27:49.534 filename0: (groupid=0, jobs=1): err= 0: pid=110261: Mon Dec 16 11:49:48 2024
00:27:49.534 read: IOPS=142, BW=570KiB/s (584kB/s)(5712KiB/10013msec)
00:27:49.534 slat (nsec): min=5981, max=52627, avg=9034.21, stdev=4612.71
00:27:49.534 clat (usec): min=377, max=42618, avg=28016.82, stdev=18862.66
00:27:49.534 lat (usec): min=383, max=42628, avg=28025.85, stdev=18862.84
00:27:49.534 clat percentiles (usec):
00:27:49.534 | 1.00th=[ 392], 5.00th=[ 420], 10.00th=[ 441], 20.00th=[ 482],
00:27:49.534 | 30.00th=[ 537], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157],
00:27:49.534 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:27:49.534 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730],
00:27:49.534 | 99.99th=[42730]
00:27:49.534 bw ( KiB/s): min= 384, max= 736, per=99.74%, avg=569.60, stdev=97.17, samples=20
00:27:49.534 iops : min= 96, max= 184, avg=142.40, stdev=24.29, samples=20
00:27:49.534 lat (usec) : 500=24.79%, 750=6.86%
00:27:49.534 lat (msec) : 4=0.28%, 50=68.07%
00:27:49.534 cpu : usr=92.01%, sys=7.57%, ctx=22, majf=0, minf=9
00:27:49.534 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:27:49.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:49.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:49.534 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:49.534 latency : target=0, window=0, percentile=100.00%, depth=4
00:27:49.534 
00:27:49.534 Run status group 0 (all jobs):
00:27:49.534 READ: bw=570KiB/s (584kB/s), 570KiB/s-570KiB/s (584kB/s-584kB/s), io=5712KiB (5849kB), run=10013-10013msec
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:49.534 
00:27:49.534 real 0m11.101s
00:27:49.534 user 0m9.906s
00:27:49.534 sys 0m1.045s
00:27:49.534 ************************************
00:27:49.534 END TEST fio_dif_1_default
00:27:49.534 ************************************
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:27:49.534 11:49:48 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:27:49.534 11:49:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:27:49.534 11:49:48 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:49.534 11:49:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:27:49.534 ************************************
00:27:49.534 START TEST fio_dif_1_multi_subsystems
00:27:49.534 ************************************
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems
00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92
-- # local files=1 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 bdev_null0 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 [2024-12-16 11:49:48.974526] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 bdev_null1 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.534 11:49:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.534 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:49.534 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.534 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.534 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:49.534 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:49.534 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:49.535 { 00:27:49.535 "params": { 00:27:49.535 "name": "Nvme$subsystem", 00:27:49.535 "trtype": "$TEST_TRANSPORT", 00:27:49.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.535 "adrfam": "ipv4", 00:27:49.535 "trsvcid": "$NVMF_PORT", 00:27:49.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.535 "hdgst": ${hdgst:-false}, 00:27:49.535 "ddgst": ${ddgst:-false} 00:27:49.535 }, 00:27:49.535 "method": "bdev_nvme_attach_controller" 00:27:49.535 } 00:27:49.535 EOF 00:27:49.535 )") 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:49.535 { 00:27:49.535 "params": { 00:27:49.535 "name": "Nvme$subsystem", 00:27:49.535 "trtype": "$TEST_TRANSPORT", 00:27:49.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.535 "adrfam": "ipv4", 00:27:49.535 "trsvcid": "$NVMF_PORT", 00:27:49.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.535 "hdgst": ${hdgst:-false}, 00:27:49.535 "ddgst": ${ddgst:-false} 00:27:49.535 }, 00:27:49.535 "method": "bdev_nvme_attach_controller" 00:27:49.535 } 00:27:49.535 EOF 00:27:49.535 )") 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
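
At this point gen_nvmf_target_json is emitting one bdev_nvme_attach_controller fragment per subsystem and piping the concatenated result through the jq call visible just above; the raw fragments appear in the trace that follows. For orientation, a sketch of the fully merged document that fio receives on /dev/fd/62 is shown here. The two params blocks are copied from the trace, while the outer subsystems/bdev wrapper is how nvmf/common.sh frames them and is reproduced from memory, so treat that framing as an assumption rather than a verbatim quote of this run:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        },
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}

Each bdev_nvme_attach_controller entry makes the LD_PRELOADed spdk_bdev fio plugin connect to one NVMe-oF subsystem and, in SPDK's usual naming, expose its namespace as a bdev (Nvme0n1, Nvme1n1) that the filename0 and filename1 jobs below read from.
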
00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:49.535 "params": { 00:27:49.535 "name": "Nvme0", 00:27:49.535 "trtype": "tcp", 00:27:49.535 "traddr": "10.0.0.3", 00:27:49.535 "adrfam": "ipv4", 00:27:49.535 "trsvcid": "4420", 00:27:49.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.535 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:49.535 "hdgst": false, 00:27:49.535 "ddgst": false 00:27:49.535 }, 00:27:49.535 "method": "bdev_nvme_attach_controller" 00:27:49.535 },{ 00:27:49.535 "params": { 00:27:49.535 "name": "Nvme1", 00:27:49.535 "trtype": "tcp", 00:27:49.535 "traddr": "10.0.0.3", 00:27:49.535 "adrfam": "ipv4", 00:27:49.535 "trsvcid": "4420", 00:27:49.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:49.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:49.535 "hdgst": false, 00:27:49.535 "ddgst": false 00:27:49.535 }, 00:27:49.535 "method": "bdev_nvme_attach_controller" 00:27:49.535 }' 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:49.535 11:49:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:49.535 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:49.535 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:49.535 fio-3.35 00:27:49.535 Starting 2 threads 00:27:59.507 00:27:59.507 filename0: (groupid=0, jobs=1): err= 0: pid=110427: Mon Dec 16 11:49:59 2024 00:27:59.507 read: IOPS=142, BW=572KiB/s (585kB/s)(5728KiB/10022msec) 00:27:59.507 slat (nsec): min=6743, max=36192, avg=9084.52, stdev=3728.43 00:27:59.507 clat (usec): min=382, max=41524, avg=27965.88, stdev=18888.80 00:27:59.507 lat (usec): min=389, max=41538, avg=27974.97, stdev=18888.43 00:27:59.507 clat percentiles (usec): 00:27:59.507 | 1.00th=[ 408], 5.00th=[ 429], 10.00th=[ 445], 20.00th=[ 478], 00:27:59.507 | 30.00th=[ 766], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:27:59.507 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:59.507 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:27:59.507 | 99.99th=[41681] 00:27:59.507 bw ( KiB/s): min= 416, max= 832, per=50.16%, avg=571.15, stdev=98.11, samples=20 00:27:59.507 iops : 
min= 104, max= 208, avg=142.75, stdev=24.52, samples=20 00:27:59.507 lat (usec) : 500=24.51%, 750=5.38%, 1000=1.96% 00:27:59.507 lat (msec) : 4=0.28%, 50=67.88% 00:27:59.507 cpu : usr=95.43%, sys=4.13%, ctx=39, majf=0, minf=0 00:27:59.507 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:59.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.507 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.507 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:59.507 filename1: (groupid=0, jobs=1): err= 0: pid=110428: Mon Dec 16 11:49:59 2024 00:27:59.507 read: IOPS=141, BW=567KiB/s (581kB/s)(5680KiB/10017msec) 00:27:59.507 slat (nsec): min=6588, max=36487, avg=8811.66, stdev=2984.11 00:27:59.508 clat (usec): min=380, max=41577, avg=28188.22, stdev=18794.10 00:27:59.508 lat (usec): min=387, max=41590, avg=28197.03, stdev=18793.75 00:27:59.508 clat percentiles (usec): 00:27:59.508 | 1.00th=[ 404], 5.00th=[ 433], 10.00th=[ 449], 20.00th=[ 478], 00:27:59.508 | 30.00th=[ 791], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:27:59.508 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:59.508 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:27:59.508 | 99.99th=[41681] 00:27:59.508 bw ( KiB/s): min= 416, max= 832, per=49.72%, avg=566.30, stdev=105.44, samples=20 00:27:59.508 iops : min= 104, max= 208, avg=141.55, stdev=26.37, samples=20 00:27:59.508 lat (usec) : 500=26.69%, 750=3.17%, 1000=1.41% 00:27:59.508 lat (msec) : 10=0.28%, 50=68.45% 00:27:59.508 cpu : usr=96.05%, sys=3.53%, ctx=28, majf=0, minf=0 00:27:59.508 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:59.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.508 issued rwts: total=1420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.508 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:59.508 00:27:59.508 Run status group 0 (all jobs): 00:27:59.508 READ: bw=1138KiB/s (1166kB/s), 567KiB/s-572KiB/s (581kB/s-585kB/s), io=11.1MiB (11.7MB), run=10017-10022msec 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.508 11:50:00 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.508 00:27:59.508 real 0m11.293s 00:27:59.508 user 0m20.058s 00:27:59.508 sys 0m1.066s 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:59.508 ************************************ 00:27:59.508 END TEST fio_dif_1_multi_subsystems 00:27:59.508 ************************************ 00:27:59.508 11:50:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:59.767 11:50:00 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:59.767 11:50:00 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:59.767 11:50:00 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:59.767 11:50:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:59.767 ************************************ 00:27:59.767 START TEST fio_dif_rand_params 00:27:59.767 ************************************ 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:59.767 bdev_null0 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:59.767 [2024-12-16 11:50:00.318115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:59.767 11:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:59.767 { 00:27:59.767 "params": { 00:27:59.767 "name": "Nvme$subsystem", 00:27:59.767 "trtype": "$TEST_TRANSPORT", 00:27:59.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.768 "adrfam": "ipv4", 00:27:59.768 "trsvcid": "$NVMF_PORT", 00:27:59.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.768 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:59.768 "hdgst": ${hdgst:-false}, 00:27:59.768 "ddgst": ${ddgst:-false} 00:27:59.768 }, 00:27:59.768 "method": "bdev_nvme_attach_controller" 00:27:59.768 } 00:27:59.768 EOF 00:27:59.768 )") 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:59.768 "params": { 00:27:59.768 "name": "Nvme0", 00:27:59.768 "trtype": "tcp", 00:27:59.768 "traddr": "10.0.0.3", 00:27:59.768 "adrfam": "ipv4", 00:27:59.768 "trsvcid": "4420", 00:27:59.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:59.768 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:59.768 "hdgst": false, 00:27:59.768 "ddgst": false 00:27:59.768 }, 00:27:59.768 "method": "bdev_nvme_attach_controller" 00:27:59.768 }' 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:59.768 11:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:00.026 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:00.026 ... 
00:28:00.026 fio-3.35 00:28:00.026 Starting 3 threads 00:28:06.597 00:28:06.597 filename0: (groupid=0, jobs=1): err= 0: pid=110579: Mon Dec 16 11:50:06 2024 00:28:06.597 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(149MiB/5001msec) 00:28:06.597 slat (nsec): min=6535, max=56928, avg=9727.73, stdev=4339.50 00:28:06.597 clat (usec): min=3767, max=18461, avg=12541.15, stdev=2886.70 00:28:06.597 lat (usec): min=3774, max=18475, avg=12550.88, stdev=2887.12 00:28:06.597 clat percentiles (usec): 00:28:06.597 | 1.00th=[ 3851], 5.00th=[ 8029], 10.00th=[ 8356], 20.00th=[ 9110], 00:28:06.597 | 30.00th=[12256], 40.00th=[13304], 50.00th=[13566], 60.00th=[13960], 00:28:06.597 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15270], 95.00th=[15795], 00:28:06.597 | 99.00th=[16712], 99.50th=[17171], 99.90th=[18482], 99.95th=[18482], 00:28:06.597 | 99.99th=[18482] 00:28:06.597 bw ( KiB/s): min=26112, max=33024, per=30.43%, avg=29696.00, stdev=2518.06, samples=9 00:28:06.597 iops : min= 204, max= 258, avg=232.00, stdev=19.67, samples=9 00:28:06.597 lat (msec) : 4=2.26%, 10=22.86%, 20=74.87% 00:28:06.597 cpu : usr=92.72%, sys=5.66%, ctx=21, majf=0, minf=0 00:28:06.597 IO depths : 1=33.0%, 2=67.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:06.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.597 issued rwts: total=1194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.597 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:06.597 filename0: (groupid=0, jobs=1): err= 0: pid=110580: Mon Dec 16 11:50:06 2024 00:28:06.597 read: IOPS=291, BW=36.5MiB/s (38.2MB/s)(182MiB/5002msec) 00:28:06.597 slat (nsec): min=6839, max=56957, avg=11800.47, stdev=4204.46 00:28:06.597 clat (usec): min=5342, max=53506, avg=10269.52, stdev=5399.94 00:28:06.597 lat (usec): min=5352, max=53526, avg=10281.32, stdev=5399.82 00:28:06.597 clat percentiles (usec): 00:28:06.597 | 1.00th=[ 5735], 5.00th=[ 6783], 10.00th=[ 7242], 20.00th=[ 8356], 00:28:06.597 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:28:06.597 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11338], 95.00th=[11863], 00:28:06.597 | 99.00th=[49546], 99.50th=[51643], 99.90th=[52691], 99.95th=[53740], 00:28:06.597 | 99.99th=[53740] 00:28:06.597 bw ( KiB/s): min=33024, max=43008, per=38.42%, avg=37489.78, stdev=3184.89, samples=9 00:28:06.597 iops : min= 258, max= 336, avg=292.89, stdev=24.88, samples=9 00:28:06.597 lat (msec) : 10=51.88%, 20=46.47%, 50=0.75%, 100=0.89% 00:28:06.597 cpu : usr=91.94%, sys=6.28%, ctx=6, majf=0, minf=0 00:28:06.597 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:06.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.597 issued rwts: total=1459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.597 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:06.597 filename0: (groupid=0, jobs=1): err= 0: pid=110581: Mon Dec 16 11:50:06 2024 00:28:06.597 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(145MiB/5002msec) 00:28:06.597 slat (nsec): min=6553, max=38566, avg=10830.46, stdev=3575.01 00:28:06.597 clat (usec): min=4931, max=55023, avg=12918.20, stdev=8272.53 00:28:06.597 lat (usec): min=4941, max=55037, avg=12929.03, stdev=8272.64 00:28:06.597 clat percentiles (usec): 00:28:06.597 | 1.00th=[ 6128], 5.00th=[ 7439], 10.00th=[ 8979], 20.00th=[10290], 00:28:06.597 
| 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:28:06.597 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13173], 95.00th=[14222], 00:28:06.597 | 99.00th=[53216], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:28:06.597 | 99.99th=[54789] 00:28:06.597 bw ( KiB/s): min=26112, max=32512, per=30.57%, avg=29832.11, stdev=2680.06, samples=9 00:28:06.597 iops : min= 204, max= 254, avg=233.00, stdev=21.01, samples=9 00:28:06.597 lat (msec) : 10=15.86%, 20=80.00%, 50=0.69%, 100=3.45% 00:28:06.597 cpu : usr=92.72%, sys=5.84%, ctx=14, majf=0, minf=0 00:28:06.597 IO depths : 1=7.1%, 2=92.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:06.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.597 issued rwts: total=1160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.597 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:06.597 00:28:06.597 Run status group 0 (all jobs): 00:28:06.597 READ: bw=95.3MiB/s (99.9MB/s), 29.0MiB/s-36.5MiB/s (30.4MB/s-38.2MB/s), io=477MiB (500MB), run=5001-5002msec 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.597 bdev_null0 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.597 [2024-12-16 11:50:06.411189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.597 bdev_null1 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:06.597 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.598 bdev_null2 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.598 { 00:28:06.598 "params": { 00:28:06.598 "name": "Nvme$subsystem", 00:28:06.598 
"trtype": "$TEST_TRANSPORT", 00:28:06.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.598 "adrfam": "ipv4", 00:28:06.598 "trsvcid": "$NVMF_PORT", 00:28:06.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.598 "hdgst": ${hdgst:-false}, 00:28:06.598 "ddgst": ${ddgst:-false} 00:28:06.598 }, 00:28:06.598 "method": "bdev_nvme_attach_controller" 00:28:06.598 } 00:28:06.598 EOF 00:28:06.598 )") 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.598 { 00:28:06.598 "params": { 00:28:06.598 "name": "Nvme$subsystem", 00:28:06.598 "trtype": "$TEST_TRANSPORT", 00:28:06.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.598 "adrfam": "ipv4", 00:28:06.598 "trsvcid": "$NVMF_PORT", 00:28:06.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.598 "hdgst": ${hdgst:-false}, 00:28:06.598 "ddgst": ${ddgst:-false} 00:28:06.598 }, 00:28:06.598 "method": "bdev_nvme_attach_controller" 00:28:06.598 } 00:28:06.598 EOF 00:28:06.598 )") 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.598 { 00:28:06.598 "params": { 00:28:06.598 "name": "Nvme$subsystem", 00:28:06.598 "trtype": "$TEST_TRANSPORT", 00:28:06.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.598 "adrfam": "ipv4", 00:28:06.598 "trsvcid": "$NVMF_PORT", 00:28:06.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.598 "hdgst": ${hdgst:-false}, 00:28:06.598 "ddgst": ${ddgst:-false} 00:28:06.598 }, 00:28:06.598 "method": "bdev_nvme_attach_controller" 00:28:06.598 } 00:28:06.598 EOF 00:28:06.598 )") 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:06.598 "params": { 00:28:06.598 "name": "Nvme0", 00:28:06.598 "trtype": "tcp", 00:28:06.598 "traddr": "10.0.0.3", 00:28:06.598 "adrfam": "ipv4", 00:28:06.598 "trsvcid": "4420", 00:28:06.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.598 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:06.598 "hdgst": false, 00:28:06.598 "ddgst": false 00:28:06.598 }, 00:28:06.598 "method": "bdev_nvme_attach_controller" 00:28:06.598 },{ 00:28:06.598 "params": { 00:28:06.598 "name": "Nvme1", 00:28:06.598 "trtype": "tcp", 00:28:06.598 "traddr": "10.0.0.3", 00:28:06.598 "adrfam": "ipv4", 00:28:06.598 "trsvcid": "4420", 00:28:06.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:06.598 "hdgst": false, 00:28:06.598 "ddgst": false 00:28:06.598 }, 00:28:06.598 "method": "bdev_nvme_attach_controller" 00:28:06.598 },{ 00:28:06.598 "params": { 00:28:06.598 "name": "Nvme2", 00:28:06.598 "trtype": "tcp", 00:28:06.598 "traddr": "10.0.0.3", 00:28:06.598 "adrfam": "ipv4", 00:28:06.598 "trsvcid": "4420", 00:28:06.598 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:06.598 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:06.598 "hdgst": false, 00:28:06.598 "ddgst": false 00:28:06.598 }, 00:28:06.598 "method": "bdev_nvme_attach_controller" 00:28:06.598 }' 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:06.598 11:50:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:06.598 11:50:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.598 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:06.598 ... 00:28:06.598 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:06.598 ... 00:28:06.598 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:06.598 ... 00:28:06.598 fio-3.35 00:28:06.598 Starting 24 threads 00:28:18.812 00:28:18.812 filename0: (groupid=0, jobs=1): err= 0: pid=110676: Mon Dec 16 11:50:17 2024 00:28:18.812 read: IOPS=204, BW=817KiB/s (836kB/s)(8196KiB/10035msec) 00:28:18.812 slat (usec): min=5, max=12024, avg=18.57, stdev=279.79 00:28:18.812 clat (msec): min=10, max=247, avg=78.19, stdev=32.01 00:28:18.812 lat (msec): min=10, max=247, avg=78.21, stdev=32.01 00:28:18.812 clat percentiles (msec): 00:28:18.812 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 44], 20.00th=[ 50], 00:28:18.812 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 74], 60.00th=[ 81], 00:28:18.813 | 70.00th=[ 93], 80.00th=[ 105], 90.00th=[ 112], 95.00th=[ 130], 00:28:18.813 | 99.00th=[ 209], 99.50th=[ 209], 99.90th=[ 249], 99.95th=[ 249], 00:28:18.813 | 99.99th=[ 249] 00:28:18.813 bw ( KiB/s): min= 384, max= 1376, per=4.52%, avg=814.60, stdev=232.24, samples=20 00:28:18.813 iops : min= 96, max= 344, avg=203.60, stdev=58.07, samples=20 00:28:18.813 lat (msec) : 20=0.78%, 50=20.25%, 100=56.91%, 250=22.06% 00:28:18.813 cpu : usr=36.82%, sys=1.01%, ctx=1065, majf=0, minf=9 00:28:18.813 IO depths : 1=0.8%, 2=1.7%, 4=8.6%, 8=75.9%, 16=13.0%, 32=0.0%, >=64=0.0% 00:28:18.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.813 filename0: (groupid=0, jobs=1): err= 0: pid=110677: Mon Dec 16 11:50:17 2024 00:28:18.813 read: IOPS=205, BW=820KiB/s (840kB/s)(8232KiB/10035msec) 00:28:18.813 slat (usec): min=6, max=12031, avg=19.62, stdev=318.36 00:28:18.813 clat (msec): min=14, max=192, avg=77.82, stdev=32.86 00:28:18.813 lat (msec): min=14, max=192, avg=77.84, stdev=32.86 00:28:18.813 clat percentiles (msec): 00:28:18.813 | 1.00th=[ 23], 5.00th=[ 34], 10.00th=[ 41], 20.00th=[ 48], 00:28:18.813 | 30.00th=[ 54], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 83], 00:28:18.813 | 70.00th=[ 95], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 140], 00:28:18.813 | 99.00th=[ 167], 99.50th=[ 188], 99.90th=[ 192], 99.95th=[ 192], 00:28:18.813 | 99.99th=[ 192] 00:28:18.813 bw ( KiB/s): min= 464, max= 1320, per=4.54%, avg=818.65, stdev=222.04, samples=20 00:28:18.813 iops : min= 116, max= 330, avg=204.60, stdev=55.55, samples=20 00:28:18.813 lat (msec) : 20=0.78%, 50=24.39%, 100=50.05%, 250=24.78% 00:28:18.813 cpu : usr=34.55%, sys=0.97%, ctx=1037, majf=0, minf=9 00:28:18.813 IO depths : 1=0.2%, 2=0.4%, 4=5.7%, 8=79.8%, 16=13.8%, 32=0.0%, >=64=0.0% 00:28:18.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 complete : 0=0.0%, 4=89.4%, 8=6.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:18.813 issued rwts: total=2058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.813 filename0: (groupid=0, jobs=1): err= 0: pid=110678: Mon Dec 16 11:50:17 2024 00:28:18.813 read: IOPS=186, BW=746KiB/s (764kB/s)(7496KiB/10047msec) 00:28:18.813 slat (nsec): min=5160, max=51859, avg=10590.68, stdev=3697.03 00:28:18.813 clat (msec): min=31, max=217, avg=85.65, stdev=33.31 00:28:18.813 lat (msec): min=31, max=217, avg=85.66, stdev=33.31 00:28:18.813 clat percentiles (msec): 00:28:18.813 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 58], 00:28:18.813 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 92], 00:28:18.813 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 131], 95.00th=[ 150], 00:28:18.813 | 99.00th=[ 194], 99.50th=[ 218], 99.90th=[ 218], 99.95th=[ 218], 00:28:18.813 | 99.99th=[ 218] 00:28:18.813 bw ( KiB/s): min= 432, max= 1128, per=4.16%, avg=750.42, stdev=208.75, samples=19 00:28:18.813 iops : min= 108, max= 282, avg=187.58, stdev=52.16, samples=19 00:28:18.813 lat (msec) : 50=14.25%, 100=58.22%, 250=27.53% 00:28:18.813 cpu : usr=31.01%, sys=0.87%, ctx=897, majf=0, minf=9 00:28:18.813 IO depths : 1=0.9%, 2=1.8%, 4=7.8%, 8=76.2%, 16=13.2%, 32=0.0%, >=64=0.0% 00:28:18.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 complete : 0=0.0%, 4=89.5%, 8=6.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 issued rwts: total=1874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.813 filename0: (groupid=0, jobs=1): err= 0: pid=110679: Mon Dec 16 11:50:17 2024 00:28:18.813 read: IOPS=195, BW=780KiB/s (799kB/s)(7824KiB/10026msec) 00:28:18.813 slat (usec): min=5, max=4025, avg=12.71, stdev=90.87 00:28:18.813 clat (msec): min=24, max=191, avg=81.90, stdev=32.08 00:28:18.813 lat (msec): min=24, max=191, avg=81.91, stdev=32.08 00:28:18.813 clat percentiles (msec): 00:28:18.813 | 1.00th=[ 26], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 58], 00:28:18.813 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:28:18.813 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 131], 95.00th=[ 144], 00:28:18.813 | 99.00th=[ 192], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:28:18.813 | 99.99th=[ 192] 00:28:18.813 bw ( KiB/s): min= 336, max= 1232, per=4.30%, avg=775.60, stdev=215.17, samples=20 00:28:18.813 iops : min= 84, max= 308, avg=193.90, stdev=53.79, samples=20 00:28:18.813 lat (msec) : 50=17.54%, 100=58.38%, 250=24.08% 00:28:18.813 cpu : usr=30.42%, sys=0.88%, ctx=838, majf=0, minf=9 00:28:18.813 IO depths : 1=0.7%, 2=1.4%, 4=7.1%, 8=77.6%, 16=13.3%, 32=0.0%, >=64=0.0% 00:28:18.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 complete : 0=0.0%, 4=89.3%, 8=6.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.813 filename0: (groupid=0, jobs=1): err= 0: pid=110680: Mon Dec 16 11:50:17 2024 00:28:18.813 read: IOPS=175, BW=701KiB/s (718kB/s)(7032KiB/10029msec) 00:28:18.813 slat (nsec): min=5302, max=26911, avg=10333.21, stdev=3476.63 00:28:18.813 clat (msec): min=31, max=238, avg=91.20, stdev=32.90 00:28:18.813 lat (msec): min=31, max=238, avg=91.21, stdev=32.90 00:28:18.813 clat percentiles (msec): 00:28:18.813 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 56], 20.00th=[ 67], 00:28:18.813 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 86], 
60.00th=[ 96], 00:28:18.813 | 70.00th=[ 106], 80.00th=[ 118], 90.00th=[ 138], 95.00th=[ 153], 00:28:18.813 | 99.00th=[ 182], 99.50th=[ 197], 99.90th=[ 239], 99.95th=[ 239], 00:28:18.813 | 99.99th=[ 239] 00:28:18.813 bw ( KiB/s): min= 384, max= 896, per=3.83%, avg=690.53, stdev=156.13, samples=19 00:28:18.813 iops : min= 96, max= 224, avg=172.63, stdev=39.03, samples=19 00:28:18.813 lat (msec) : 50=9.50%, 100=57.00%, 250=33.50% 00:28:18.813 cpu : usr=31.36%, sys=0.78%, ctx=1004, majf=0, minf=9 00:28:18.813 IO depths : 1=1.6%, 2=3.7%, 4=12.9%, 8=70.0%, 16=11.8%, 32=0.0%, >=64=0.0% 00:28:18.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 issued rwts: total=1758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.813 filename0: (groupid=0, jobs=1): err= 0: pid=110681: Mon Dec 16 11:50:17 2024 00:28:18.813 read: IOPS=212, BW=851KiB/s (872kB/s)(8552KiB/10047msec) 00:28:18.813 slat (nsec): min=4966, max=62692, avg=10509.73, stdev=3874.06 00:28:18.813 clat (msec): min=3, max=200, avg=74.98, stdev=36.58 00:28:18.813 lat (msec): min=3, max=200, avg=74.99, stdev=36.58 00:28:18.813 clat percentiles (msec): 00:28:18.813 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 22], 20.00th=[ 47], 00:28:18.813 | 30.00th=[ 56], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 80], 00:28:18.813 | 70.00th=[ 92], 80.00th=[ 106], 90.00th=[ 125], 95.00th=[ 138], 00:28:18.813 | 99.00th=[ 190], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 201], 00:28:18.813 | 99.99th=[ 201] 00:28:18.813 bw ( KiB/s): min= 512, max= 2560, per=4.71%, avg=848.80, stdev=438.72, samples=20 00:28:18.813 iops : min= 128, max= 640, avg=212.20, stdev=109.68, samples=20 00:28:18.813 lat (msec) : 4=0.75%, 10=1.50%, 20=4.49%, 50=19.55%, 100=51.73% 00:28:18.813 lat (msec) : 250=21.98% 00:28:18.813 cpu : usr=42.68%, sys=1.27%, ctx=993, majf=0, minf=9 00:28:18.813 IO depths : 1=2.9%, 2=6.2%, 4=16.4%, 8=64.5%, 16=10.1%, 32=0.0%, >=64=0.0% 00:28:18.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 issued rwts: total=2138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.813 filename0: (groupid=0, jobs=1): err= 0: pid=110682: Mon Dec 16 11:50:17 2024 00:28:18.813 read: IOPS=180, BW=720KiB/s (737kB/s)(7252KiB/10071msec) 00:28:18.813 slat (usec): min=3, max=8024, avg=16.97, stdev=210.55 00:28:18.813 clat (msec): min=30, max=256, avg=88.62, stdev=33.16 00:28:18.813 lat (msec): min=30, max=256, avg=88.64, stdev=33.17 00:28:18.813 clat percentiles (msec): 00:28:18.813 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 66], 00:28:18.813 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 93], 00:28:18.813 | 70.00th=[ 106], 80.00th=[ 114], 90.00th=[ 134], 95.00th=[ 144], 00:28:18.813 | 99.00th=[ 188], 99.50th=[ 215], 99.90th=[ 257], 99.95th=[ 257], 00:28:18.813 | 99.99th=[ 257] 00:28:18.813 bw ( KiB/s): min= 384, max= 1120, per=3.99%, avg=719.25, stdev=196.24, samples=20 00:28:18.813 iops : min= 96, max= 280, avg=179.80, stdev=49.04, samples=20 00:28:18.813 lat (msec) : 50=13.07%, 100=53.01%, 250=33.65%, 500=0.28% 00:28:18.813 cpu : usr=39.53%, sys=1.01%, ctx=1047, majf=0, minf=9 00:28:18.813 IO depths : 1=3.1%, 2=6.5%, 4=16.0%, 8=64.7%, 16=9.8%, 32=0.0%, >=64=0.0% 00:28:18.813 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.813 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.813 filename0: (groupid=0, jobs=1): err= 0: pid=110683: Mon Dec 16 11:50:17 2024 00:28:18.813 read: IOPS=185, BW=741KiB/s (758kB/s)(7436KiB/10041msec) 00:28:18.813 slat (usec): min=4, max=8022, avg=21.03, stdev=277.95 00:28:18.813 clat (msec): min=19, max=245, avg=86.27, stdev=33.84 00:28:18.813 lat (msec): min=19, max=245, avg=86.29, stdev=33.85 00:28:18.813 clat percentiles (msec): 00:28:18.813 | 1.00th=[ 28], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 61], 00:28:18.813 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 90], 00:28:18.813 | 70.00th=[ 99], 80.00th=[ 111], 90.00th=[ 132], 95.00th=[ 146], 00:28:18.813 | 99.00th=[ 201], 99.50th=[ 226], 99.90th=[ 247], 99.95th=[ 247], 00:28:18.813 | 99.99th=[ 247] 00:28:18.813 bw ( KiB/s): min= 432, max= 1104, per=4.12%, avg=742.32, stdev=184.94, samples=19 00:28:18.813 iops : min= 108, max= 276, avg=185.58, stdev=46.23, samples=19 00:28:18.813 lat (msec) : 20=0.11%, 50=13.50%, 100=58.58%, 250=27.81% 00:28:18.813 cpu : usr=41.24%, sys=1.24%, ctx=1172, majf=0, minf=9 00:28:18.813 IO depths : 1=2.0%, 2=4.4%, 4=13.3%, 8=69.3%, 16=11.0%, 32=0.0%, >=64=0.0% 00:28:18.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 issued rwts: total=1859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.814 filename1: (groupid=0, jobs=1): err= 0: pid=110684: Mon Dec 16 11:50:17 2024 00:28:18.814 read: IOPS=172, BW=692KiB/s (708kB/s)(6960KiB/10061msec) 00:28:18.814 slat (usec): min=5, max=3473, avg=12.81, stdev=83.10 00:28:18.814 clat (msec): min=31, max=207, avg=92.43, stdev=32.22 00:28:18.814 lat (msec): min=31, max=207, avg=92.44, stdev=32.22 00:28:18.814 clat percentiles (msec): 00:28:18.814 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 55], 20.00th=[ 67], 00:28:18.814 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 91], 60.00th=[ 99], 00:28:18.814 | 70.00th=[ 108], 80.00th=[ 123], 90.00th=[ 133], 95.00th=[ 150], 00:28:18.814 | 99.00th=[ 178], 99.50th=[ 194], 99.90th=[ 209], 99.95th=[ 209], 00:28:18.814 | 99.99th=[ 209] 00:28:18.814 bw ( KiB/s): min= 480, max= 1024, per=3.84%, avg=692.21, stdev=146.17, samples=19 00:28:18.814 iops : min= 120, max= 256, avg=173.05, stdev=36.54, samples=19 00:28:18.814 lat (msec) : 50=9.08%, 100=52.76%, 250=38.16% 00:28:18.814 cpu : usr=33.97%, sys=0.79%, ctx=1061, majf=0, minf=10 00:28:18.814 IO depths : 1=2.4%, 2=4.9%, 4=13.6%, 8=67.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:28:18.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 complete : 0=0.0%, 4=91.0%, 8=4.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 issued rwts: total=1740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.814 filename1: (groupid=0, jobs=1): err= 0: pid=110685: Mon Dec 16 11:50:17 2024 00:28:18.814 read: IOPS=210, BW=843KiB/s (863kB/s)(8488KiB/10066msec) 00:28:18.814 slat (nsec): min=3933, max=39099, avg=10349.41, stdev=3788.89 00:28:18.814 clat (msec): min=8, max=180, avg=75.71, stdev=31.67 00:28:18.814 lat (msec): min=8, max=180, avg=75.72, stdev=31.67 00:28:18.814 clat 
percentiles (msec): 00:28:18.814 | 1.00th=[ 16], 5.00th=[ 25], 10.00th=[ 40], 20.00th=[ 48], 00:28:18.814 | 30.00th=[ 55], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 83], 00:28:18.814 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 120], 95.00th=[ 130], 00:28:18.814 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:28:18.814 | 99.99th=[ 180] 00:28:18.814 bw ( KiB/s): min= 480, max= 1736, per=4.68%, avg=842.05, stdev=273.24, samples=20 00:28:18.814 iops : min= 120, max= 434, avg=210.45, stdev=68.33, samples=20 00:28:18.814 lat (msec) : 10=0.28%, 20=1.79%, 50=22.81%, 100=49.48%, 250=25.64% 00:28:18.814 cpu : usr=38.23%, sys=0.89%, ctx=1160, majf=0, minf=9 00:28:18.814 IO depths : 1=0.1%, 2=0.6%, 4=6.5%, 8=79.0%, 16=13.8%, 32=0.0%, >=64=0.0% 00:28:18.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 complete : 0=0.0%, 4=89.3%, 8=6.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 issued rwts: total=2122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.814 filename1: (groupid=0, jobs=1): err= 0: pid=110686: Mon Dec 16 11:50:17 2024 00:28:18.814 read: IOPS=179, BW=718KiB/s (736kB/s)(7208KiB/10034msec) 00:28:18.814 slat (usec): min=5, max=13020, avg=31.07, stdev=444.05 00:28:18.814 clat (msec): min=31, max=261, avg=88.81, stdev=36.66 00:28:18.814 lat (msec): min=31, max=261, avg=88.85, stdev=36.67 00:28:18.814 clat percentiles (msec): 00:28:18.814 | 1.00th=[ 33], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:28:18.814 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 91], 00:28:18.814 | 70.00th=[ 108], 80.00th=[ 121], 90.00th=[ 140], 95.00th=[ 150], 00:28:18.814 | 99.00th=[ 203], 99.50th=[ 205], 99.90th=[ 262], 99.95th=[ 262], 00:28:18.814 | 99.99th=[ 262] 00:28:18.814 bw ( KiB/s): min= 384, max= 1123, per=4.00%, avg=720.16, stdev=225.34, samples=19 00:28:18.814 iops : min= 96, max= 280, avg=180.00, stdev=56.26, samples=19 00:28:18.814 lat (msec) : 50=16.65%, 100=50.50%, 250=32.57%, 500=0.28% 00:28:18.814 cpu : usr=36.49%, sys=1.20%, ctx=1035, majf=0, minf=9 00:28:18.814 IO depths : 1=2.1%, 2=4.4%, 4=13.1%, 8=69.2%, 16=11.2%, 32=0.0%, >=64=0.0% 00:28:18.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 issued rwts: total=1802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.814 filename1: (groupid=0, jobs=1): err= 0: pid=110687: Mon Dec 16 11:50:17 2024 00:28:18.814 read: IOPS=216, BW=867KiB/s (888kB/s)(8756KiB/10102msec) 00:28:18.814 slat (usec): min=3, max=4019, avg=11.91, stdev=85.80 00:28:18.814 clat (usec): min=1578, max=179756, avg=73582.19, stdev=35506.46 00:28:18.814 lat (usec): min=1585, max=179770, avg=73594.11, stdev=35507.80 00:28:18.814 clat percentiles (usec): 00:28:18.814 | 1.00th=[ 1696], 5.00th=[ 6063], 10.00th=[ 21627], 20.00th=[ 47973], 00:28:18.814 | 30.00th=[ 59507], 40.00th=[ 69731], 50.00th=[ 71828], 60.00th=[ 80217], 00:28:18.814 | 70.00th=[ 91751], 80.00th=[101188], 90.00th=[120062], 95.00th=[133694], 00:28:18.814 | 99.00th=[162530], 99.50th=[168821], 99.90th=[175113], 99.95th=[179307], 00:28:18.814 | 99.99th=[179307] 00:28:18.814 bw ( KiB/s): min= 560, max= 2816, per=4.83%, avg=869.00, stdev=481.97, samples=20 00:28:18.814 iops : min= 140, max= 704, avg=217.20, stdev=120.51, samples=20 00:28:18.814 lat (msec) : 2=2.19%, 4=2.19%, 10=1.05%, 20=3.02%, 
50=16.45% 00:28:18.814 lat (msec) : 100=54.36%, 250=20.74% 00:28:18.814 cpu : usr=32.52%, sys=0.81%, ctx=947, majf=0, minf=0 00:28:18.814 IO depths : 1=0.9%, 2=2.4%, 4=10.5%, 8=73.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:28:18.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 issued rwts: total=2189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.814 filename1: (groupid=0, jobs=1): err= 0: pid=110688: Mon Dec 16 11:50:17 2024 00:28:18.814 read: IOPS=165, BW=661KiB/s (676kB/s)(6632KiB/10040msec) 00:28:18.814 slat (usec): min=4, max=9020, avg=16.16, stdev=221.30 00:28:18.814 clat (msec): min=21, max=204, avg=96.75, stdev=35.76 00:28:18.814 lat (msec): min=21, max=204, avg=96.77, stdev=35.76 00:28:18.814 clat percentiles (msec): 00:28:18.814 | 1.00th=[ 31], 5.00th=[ 51], 10.00th=[ 59], 20.00th=[ 69], 00:28:18.814 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 88], 60.00th=[ 101], 00:28:18.814 | 70.00th=[ 117], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 161], 00:28:18.814 | 99.00th=[ 190], 99.50th=[ 197], 99.90th=[ 205], 99.95th=[ 205], 00:28:18.814 | 99.99th=[ 205] 00:28:18.814 bw ( KiB/s): min= 384, max= 944, per=3.65%, avg=657.68, stdev=168.33, samples=19 00:28:18.814 iops : min= 96, max= 236, avg=164.42, stdev=42.08, samples=19 00:28:18.814 lat (msec) : 50=4.95%, 100=55.43%, 250=39.63% 00:28:18.814 cpu : usr=34.78%, sys=0.95%, ctx=1132, majf=0, minf=10 00:28:18.814 IO depths : 1=2.8%, 2=6.2%, 4=16.8%, 8=64.1%, 16=10.1%, 32=0.0%, >=64=0.0% 00:28:18.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 issued rwts: total=1658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.814 filename1: (groupid=0, jobs=1): err= 0: pid=110689: Mon Dec 16 11:50:17 2024 00:28:18.814 read: IOPS=202, BW=811KiB/s (830kB/s)(8184KiB/10092msec) 00:28:18.814 slat (usec): min=7, max=8030, avg=20.58, stdev=234.73 00:28:18.814 clat (msec): min=10, max=250, avg=78.63, stdev=37.07 00:28:18.814 lat (msec): min=10, max=250, avg=78.65, stdev=37.07 00:28:18.814 clat percentiles (msec): 00:28:18.814 | 1.00th=[ 14], 5.00th=[ 22], 10.00th=[ 38], 20.00th=[ 48], 00:28:18.814 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 82], 00:28:18.814 | 70.00th=[ 93], 80.00th=[ 107], 90.00th=[ 129], 95.00th=[ 148], 00:28:18.814 | 99.00th=[ 186], 99.50th=[ 203], 99.90th=[ 251], 99.95th=[ 251], 00:28:18.814 | 99.99th=[ 251] 00:28:18.814 bw ( KiB/s): min= 384, max= 1664, per=4.51%, avg=813.00, stdev=289.23, samples=20 00:28:18.814 iops : min= 96, max= 416, avg=203.20, stdev=72.33, samples=20 00:28:18.814 lat (msec) : 20=3.57%, 50=19.99%, 100=54.45%, 250=21.85%, 500=0.15% 00:28:18.814 cpu : usr=33.93%, sys=0.99%, ctx=1085, majf=0, minf=9 00:28:18.814 IO depths : 1=1.3%, 2=2.9%, 4=10.9%, 8=72.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:28:18.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 issued rwts: total=2046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.814 filename1: (groupid=0, jobs=1): err= 0: pid=110690: Mon Dec 16 11:50:17 2024 00:28:18.814 read: IOPS=195, BW=784KiB/s 
(802kB/s)(7888KiB/10066msec) 00:28:18.814 slat (nsec): min=4054, max=45064, avg=10761.97, stdev=3995.37 00:28:18.814 clat (msec): min=17, max=190, avg=81.55, stdev=31.49 00:28:18.814 lat (msec): min=17, max=190, avg=81.56, stdev=31.49 00:28:18.814 clat percentiles (msec): 00:28:18.814 | 1.00th=[ 23], 5.00th=[ 34], 10.00th=[ 43], 20.00th=[ 56], 00:28:18.814 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 85], 00:28:18.814 | 70.00th=[ 95], 80.00th=[ 108], 90.00th=[ 125], 95.00th=[ 142], 00:28:18.814 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 190], 99.95th=[ 190], 00:28:18.814 | 99.99th=[ 190] 00:28:18.814 bw ( KiB/s): min= 432, max= 1456, per=4.34%, avg=782.85, stdev=223.78, samples=20 00:28:18.814 iops : min= 108, max= 364, avg=195.65, stdev=55.92, samples=20 00:28:18.814 lat (msec) : 20=0.25%, 50=15.62%, 100=59.38%, 250=24.75% 00:28:18.814 cpu : usr=35.63%, sys=1.00%, ctx=1219, majf=0, minf=9 00:28:18.814 IO depths : 1=0.5%, 2=1.6%, 4=8.1%, 8=76.2%, 16=13.6%, 32=0.0%, >=64=0.0% 00:28:18.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 complete : 0=0.0%, 4=89.8%, 8=6.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.814 issued rwts: total=1972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.814 filename1: (groupid=0, jobs=1): err= 0: pid=110691: Mon Dec 16 11:50:17 2024 00:28:18.814 read: IOPS=166, BW=667KiB/s (683kB/s)(6684KiB/10026msec) 00:28:18.814 slat (nsec): min=3775, max=28414, avg=10169.38, stdev=3462.77 00:28:18.814 clat (msec): min=25, max=229, avg=95.87, stdev=36.55 00:28:18.814 lat (msec): min=25, max=229, avg=95.88, stdev=36.55 00:28:18.814 clat percentiles (msec): 00:28:18.814 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 68], 00:28:18.814 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 103], 00:28:18.814 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 163], 00:28:18.814 | 99.00th=[ 209], 99.50th=[ 209], 99.90th=[ 230], 99.95th=[ 230], 00:28:18.814 | 99.99th=[ 230] 00:28:18.814 bw ( KiB/s): min= 384, max= 992, per=3.61%, avg=651.37, stdev=168.22, samples=19 00:28:18.815 iops : min= 96, max= 248, avg=162.84, stdev=42.05, samples=19 00:28:18.815 lat (msec) : 50=8.38%, 100=50.39%, 250=41.23% 00:28:18.815 cpu : usr=34.72%, sys=1.07%, ctx=1000, majf=0, minf=9 00:28:18.815 IO depths : 1=2.0%, 2=4.5%, 4=13.2%, 8=68.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:28:18.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 complete : 0=0.0%, 4=91.2%, 8=4.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 issued rwts: total=1671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.815 filename2: (groupid=0, jobs=1): err= 0: pid=110692: Mon Dec 16 11:50:17 2024 00:28:18.815 read: IOPS=165, BW=664KiB/s (680kB/s)(6656KiB/10029msec) 00:28:18.815 slat (usec): min=5, max=8023, avg=17.52, stdev=219.63 00:28:18.815 clat (msec): min=23, max=217, avg=96.29, stdev=32.03 00:28:18.815 lat (msec): min=23, max=217, avg=96.30, stdev=32.03 00:28:18.815 clat percentiles (msec): 00:28:18.815 | 1.00th=[ 28], 5.00th=[ 48], 10.00th=[ 62], 20.00th=[ 71], 00:28:18.815 | 30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 94], 60.00th=[ 100], 00:28:18.815 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 140], 95.00th=[ 155], 00:28:18.815 | 99.00th=[ 192], 99.50th=[ 201], 99.90th=[ 218], 99.95th=[ 218], 00:28:18.815 | 99.99th=[ 218] 00:28:18.815 bw ( KiB/s): min= 384, max= 896, per=3.63%, 
avg=653.47, stdev=137.15, samples=19 00:28:18.815 iops : min= 96, max= 224, avg=163.37, stdev=34.29, samples=19 00:28:18.815 lat (msec) : 50=6.01%, 100=54.99%, 250=39.00% 00:28:18.815 cpu : usr=32.90%, sys=0.86%, ctx=997, majf=0, minf=9 00:28:18.815 IO depths : 1=2.4%, 2=5.6%, 4=16.0%, 8=65.4%, 16=10.5%, 32=0.0%, >=64=0.0% 00:28:18.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 complete : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 issued rwts: total=1664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.815 filename2: (groupid=0, jobs=1): err= 0: pid=110693: Mon Dec 16 11:50:17 2024 00:28:18.815 read: IOPS=188, BW=755KiB/s (773kB/s)(7580KiB/10039msec) 00:28:18.815 slat (usec): min=4, max=8018, avg=15.04, stdev=184.62 00:28:18.815 clat (msec): min=30, max=181, avg=84.59, stdev=33.03 00:28:18.815 lat (msec): min=30, max=181, avg=84.60, stdev=33.02 00:28:18.815 clat percentiles (msec): 00:28:18.815 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 54], 00:28:18.815 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 89], 00:28:18.815 | 70.00th=[ 100], 80.00th=[ 111], 90.00th=[ 128], 95.00th=[ 155], 00:28:18.815 | 99.00th=[ 171], 99.50th=[ 182], 99.90th=[ 182], 99.95th=[ 182], 00:28:18.815 | 99.99th=[ 182] 00:28:18.815 bw ( KiB/s): min= 384, max= 1152, per=4.18%, avg=753.25, stdev=200.66, samples=20 00:28:18.815 iops : min= 96, max= 288, avg=188.30, stdev=50.18, samples=20 00:28:18.815 lat (msec) : 50=16.57%, 100=53.77%, 250=29.66% 00:28:18.815 cpu : usr=38.38%, sys=1.15%, ctx=1436, majf=0, minf=9 00:28:18.815 IO depths : 1=1.6%, 2=3.2%, 4=10.3%, 8=73.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:28:18.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 issued rwts: total=1895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.815 filename2: (groupid=0, jobs=1): err= 0: pid=110694: Mon Dec 16 11:50:17 2024 00:28:18.815 read: IOPS=178, BW=712KiB/s (729kB/s)(7152KiB/10041msec) 00:28:18.815 slat (usec): min=5, max=12016, avg=23.77, stdev=354.38 00:28:18.815 clat (msec): min=28, max=205, avg=89.73, stdev=33.84 00:28:18.815 lat (msec): min=28, max=205, avg=89.75, stdev=33.83 00:28:18.815 clat percentiles (msec): 00:28:18.815 | 1.00th=[ 32], 5.00th=[ 45], 10.00th=[ 53], 20.00th=[ 62], 00:28:18.815 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 95], 00:28:18.815 | 70.00th=[ 105], 80.00th=[ 118], 90.00th=[ 140], 95.00th=[ 148], 00:28:18.815 | 99.00th=[ 180], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:28:18.815 | 99.99th=[ 205] 00:28:18.815 bw ( KiB/s): min= 384, max= 1024, per=3.91%, avg=705.74, stdev=187.40, samples=19 00:28:18.815 iops : min= 96, max= 256, avg=176.42, stdev=46.84, samples=19 00:28:18.815 lat (msec) : 50=9.34%, 100=55.15%, 250=35.51% 00:28:18.815 cpu : usr=34.35%, sys=0.90%, ctx=1112, majf=0, minf=9 00:28:18.815 IO depths : 1=1.7%, 2=3.7%, 4=11.2%, 8=71.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:28:18.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 issued rwts: total=1788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.815 filename2: (groupid=0, jobs=1): 
err= 0: pid=110695: Mon Dec 16 11:50:17 2024 00:28:18.815 read: IOPS=193, BW=776KiB/s (794kB/s)(7788KiB/10041msec) 00:28:18.815 slat (usec): min=5, max=4020, avg=12.78, stdev=91.05 00:28:18.815 clat (msec): min=10, max=207, avg=82.34, stdev=35.26 00:28:18.815 lat (msec): min=11, max=208, avg=82.35, stdev=35.26 00:28:18.815 clat percentiles (msec): 00:28:18.815 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 41], 20.00th=[ 55], 00:28:18.815 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 78], 60.00th=[ 85], 00:28:18.815 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 133], 95.00th=[ 148], 00:28:18.815 | 99.00th=[ 192], 99.50th=[ 192], 99.90th=[ 209], 99.95th=[ 209], 00:28:18.815 | 99.99th=[ 209] 00:28:18.815 bw ( KiB/s): min= 384, max= 1536, per=4.29%, avg=773.85, stdev=257.72, samples=20 00:28:18.815 iops : min= 96, max= 384, avg=193.40, stdev=64.44, samples=20 00:28:18.815 lat (msec) : 20=1.64%, 50=15.92%, 100=56.81%, 250=25.63% 00:28:18.815 cpu : usr=37.11%, sys=1.10%, ctx=1164, majf=0, minf=9 00:28:18.815 IO depths : 1=1.4%, 2=2.9%, 4=10.3%, 8=72.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:28:18.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 complete : 0=0.0%, 4=90.3%, 8=5.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 issued rwts: total=1947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.815 filename2: (groupid=0, jobs=1): err= 0: pid=110696: Mon Dec 16 11:50:17 2024 00:28:18.815 read: IOPS=166, BW=667KiB/s (683kB/s)(6704KiB/10044msec) 00:28:18.815 slat (usec): min=4, max=8036, avg=15.40, stdev=196.09 00:28:18.815 clat (msec): min=31, max=253, avg=95.76, stdev=34.14 00:28:18.815 lat (msec): min=31, max=253, avg=95.78, stdev=34.14 00:28:18.815 clat percentiles (msec): 00:28:18.815 | 1.00th=[ 36], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 69], 00:28:18.815 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 90], 60.00th=[ 99], 00:28:18.815 | 70.00th=[ 109], 80.00th=[ 120], 90.00th=[ 142], 95.00th=[ 159], 00:28:18.815 | 99.00th=[ 209], 99.50th=[ 209], 99.90th=[ 243], 99.95th=[ 253], 00:28:18.815 | 99.99th=[ 253] 00:28:18.815 bw ( KiB/s): min= 336, max= 984, per=3.66%, avg=660.63, stdev=184.04, samples=19 00:28:18.815 iops : min= 84, max= 246, avg=165.16, stdev=46.01, samples=19 00:28:18.815 lat (msec) : 50=4.59%, 100=56.44%, 250=38.90%, 500=0.06% 00:28:18.815 cpu : usr=33.33%, sys=0.95%, ctx=1113, majf=0, minf=9 00:28:18.815 IO depths : 1=1.7%, 2=3.9%, 4=13.7%, 8=69.0%, 16=11.6%, 32=0.0%, >=64=0.0% 00:28:18.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 complete : 0=0.0%, 4=90.4%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 issued rwts: total=1676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.815 filename2: (groupid=0, jobs=1): err= 0: pid=110697: Mon Dec 16 11:50:17 2024 00:28:18.815 read: IOPS=171, BW=688KiB/s (704kB/s)(6904KiB/10042msec) 00:28:18.815 slat (usec): min=4, max=11025, avg=19.12, stdev=282.22 00:28:18.815 clat (msec): min=20, max=255, avg=92.81, stdev=41.03 00:28:18.815 lat (msec): min=20, max=255, avg=92.83, stdev=41.03 00:28:18.815 clat percentiles (msec): 00:28:18.815 | 1.00th=[ 23], 5.00th=[ 40], 10.00th=[ 50], 20.00th=[ 62], 00:28:18.815 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 96], 00:28:18.815 | 70.00th=[ 105], 80.00th=[ 129], 90.00th=[ 153], 95.00th=[ 171], 00:28:18.815 | 99.00th=[ 209], 99.50th=[ 209], 99.90th=[ 255], 99.95th=[ 255], 00:28:18.815 
| 99.99th=[ 255] 00:28:18.815 bw ( KiB/s): min= 344, max= 1282, per=3.77%, avg=679.68, stdev=233.04, samples=19 00:28:18.815 iops : min= 86, max= 320, avg=169.89, stdev=58.19, samples=19 00:28:18.815 lat (msec) : 50=11.70%, 100=56.26%, 250=31.69%, 500=0.35% 00:28:18.815 cpu : usr=33.88%, sys=1.01%, ctx=1075, majf=0, minf=9 00:28:18.815 IO depths : 1=1.6%, 2=3.4%, 4=12.4%, 8=71.3%, 16=11.4%, 32=0.0%, >=64=0.0% 00:28:18.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 complete : 0=0.0%, 4=90.2%, 8=4.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 issued rwts: total=1726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.815 filename2: (groupid=0, jobs=1): err= 0: pid=110698: Mon Dec 16 11:50:17 2024 00:28:18.815 read: IOPS=197, BW=792KiB/s (811kB/s)(7988KiB/10092msec) 00:28:18.815 slat (usec): min=7, max=12033, avg=20.37, stdev=323.36 00:28:18.815 clat (msec): min=13, max=191, avg=80.54, stdev=34.80 00:28:18.815 lat (msec): min=13, max=191, avg=80.56, stdev=34.81 00:28:18.815 clat percentiles (msec): 00:28:18.815 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 45], 20.00th=[ 50], 00:28:18.815 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 83], 00:28:18.815 | 70.00th=[ 95], 80.00th=[ 111], 90.00th=[ 133], 95.00th=[ 144], 00:28:18.815 | 99.00th=[ 176], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:28:18.815 | 99.99th=[ 192] 00:28:18.815 bw ( KiB/s): min= 384, max= 1368, per=4.39%, avg=791.75, stdev=242.28, samples=20 00:28:18.815 iops : min= 96, max= 342, avg=197.90, stdev=60.55, samples=20 00:28:18.815 lat (msec) : 20=1.60%, 50=19.03%, 100=54.18%, 250=25.19% 00:28:18.815 cpu : usr=33.16%, sys=1.16%, ctx=1038, majf=0, minf=9 00:28:18.815 IO depths : 1=0.9%, 2=2.0%, 4=8.6%, 8=75.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:28:18.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 complete : 0=0.0%, 4=89.6%, 8=6.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.815 issued rwts: total=1997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.815 filename2: (groupid=0, jobs=1): err= 0: pid=110699: Mon Dec 16 11:50:17 2024 00:28:18.815 read: IOPS=207, BW=829KiB/s (849kB/s)(8332KiB/10050msec) 00:28:18.815 slat (usec): min=4, max=12027, avg=15.84, stdev=263.33 00:28:18.815 clat (msec): min=3, max=226, avg=76.89, stdev=36.63 00:28:18.815 lat (msec): min=3, max=226, avg=76.90, stdev=36.63 00:28:18.815 clat percentiles (msec): 00:28:18.815 | 1.00th=[ 6], 5.00th=[ 29], 10.00th=[ 35], 20.00th=[ 48], 00:28:18.815 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 81], 00:28:18.816 | 70.00th=[ 93], 80.00th=[ 109], 90.00th=[ 125], 95.00th=[ 140], 00:28:18.816 | 99.00th=[ 194], 99.50th=[ 199], 99.90th=[ 228], 99.95th=[ 228], 00:28:18.816 | 99.99th=[ 228] 00:28:18.816 bw ( KiB/s): min= 512, max= 1944, per=4.59%, avg=826.55, stdev=330.61, samples=20 00:28:18.816 iops : min= 128, max= 486, avg=206.60, stdev=82.67, samples=20 00:28:18.816 lat (msec) : 4=0.77%, 10=1.54%, 20=1.87%, 50=22.28%, 100=50.22% 00:28:18.816 lat (msec) : 250=23.33% 00:28:18.816 cpu : usr=31.41%, sys=0.83%, ctx=987, majf=0, minf=9 00:28:18.816 IO depths : 1=0.8%, 2=1.6%, 4=7.1%, 8=77.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:28:18.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.816 complete : 0=0.0%, 4=89.3%, 8=6.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.816 issued rwts: total=2083,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:28:18.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:18.816 00:28:18.816 Run status group 0 (all jobs): 00:28:18.816 READ: bw=17.6MiB/s (18.4MB/s), 661KiB/s-867KiB/s (676kB/s-888kB/s), io=178MiB (186MB), run=10026-10102msec 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 
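Stripped of the xtrace wrapping, the destroy_subsystems/create_subsystems helpers traced above reduce to a handful of RPCs per subsystem. The sketch below restates the sequence for subsystem 0 of the --dif-type 2 round just completed, using scripts/rpc.py; the harness's rpc_cmd wrapper drives the same RPCs over the target's RPC socket, and every flag here is copied from the trace, so treat this as an illustration of the sequence rather than the harness's literal code.

  # Per-subsystem setup, as traced for the --dif-type 2 round above:
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.3 -s 4420
  # Matching teardown, as in destroy_subsystems above:
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0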
00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 bdev_null0 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 [2024-12-16 11:50:17.963002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 bdev_null1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:18.816 11:50:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.816 { 00:28:18.816 "params": { 00:28:18.816 "name": "Nvme$subsystem", 00:28:18.816 "trtype": "$TEST_TRANSPORT", 00:28:18.816 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.816 "adrfam": "ipv4", 00:28:18.816 "trsvcid": "$NVMF_PORT", 00:28:18.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.816 "hdgst": ${hdgst:-false}, 00:28:18.816 "ddgst": ${ddgst:-false} 00:28:18.816 }, 00:28:18.816 "method": "bdev_nvme_attach_controller" 00:28:18.816 } 00:28:18.816 EOF 00:28:18.816 )") 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:18.816 11:50:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.817 { 00:28:18.817 "params": { 00:28:18.817 "name": "Nvme$subsystem", 00:28:18.817 "trtype": "$TEST_TRANSPORT", 00:28:18.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.817 "adrfam": "ipv4", 00:28:18.817 "trsvcid": "$NVMF_PORT", 00:28:18.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.817 "hdgst": ${hdgst:-false}, 00:28:18.817 "ddgst": ${ddgst:-false} 00:28:18.817 }, 00:28:18.817 "method": "bdev_nvme_attach_controller" 00:28:18.817 } 00:28:18.817 EOF 00:28:18.817 )") 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:18.817 "params": { 00:28:18.817 "name": "Nvme0", 00:28:18.817 "trtype": "tcp", 00:28:18.817 "traddr": "10.0.0.3", 00:28:18.817 "adrfam": "ipv4", 00:28:18.817 "trsvcid": "4420", 00:28:18.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:18.817 "hdgst": false, 00:28:18.817 "ddgst": false 00:28:18.817 }, 00:28:18.817 "method": "bdev_nvme_attach_controller" 00:28:18.817 },{ 00:28:18.817 "params": { 00:28:18.817 "name": "Nvme1", 00:28:18.817 "trtype": "tcp", 00:28:18.817 "traddr": "10.0.0.3", 00:28:18.817 "adrfam": "ipv4", 00:28:18.817 "trsvcid": "4420", 00:28:18.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:18.817 "hdgst": false, 00:28:18.817 "ddgst": false 00:28:18.817 }, 00:28:18.817 "method": "bdev_nvme_attach_controller" 00:28:18.817 }' 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:18.817 11:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:18.817 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:18.817 ... 00:28:18.817 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:18.817 ... 
00:28:18.817 fio-3.35 00:28:18.817 Starting 4 threads 00:28:24.155 00:28:24.155 filename0: (groupid=0, jobs=1): err= 0: pid=110837: Mon Dec 16 11:50:23 2024 00:28:24.155 read: IOPS=2066, BW=16.1MiB/s (16.9MB/s)(80.8MiB/5002msec) 00:28:24.155 slat (nsec): min=4306, max=73495, avg=16012.51, stdev=5093.26 00:28:24.155 clat (usec): min=2793, max=6349, avg=3793.73, stdev=143.24 00:28:24.155 lat (usec): min=2806, max=6363, avg=3809.75, stdev=143.51 00:28:24.155 clat percentiles (usec): 00:28:24.155 | 1.00th=[ 3556], 5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 3687], 00:28:24.155 | 30.00th=[ 3720], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3818], 00:28:24.155 | 70.00th=[ 3851], 80.00th=[ 3884], 90.00th=[ 3949], 95.00th=[ 4015], 00:28:24.155 | 99.00th=[ 4113], 99.50th=[ 4146], 99.90th=[ 4555], 99.95th=[ 6325], 00:28:24.155 | 99.99th=[ 6325] 00:28:24.155 bw ( KiB/s): min=16256, max=16768, per=24.97%, avg=16524.80, stdev=185.49, samples=10 00:28:24.155 iops : min= 2032, max= 2096, avg=2065.60, stdev=23.19, samples=10 00:28:24.155 lat (msec) : 4=93.53%, 10=6.47% 00:28:24.155 cpu : usr=93.88%, sys=4.96%, ctx=55, majf=0, minf=0 00:28:24.155 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.155 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.155 issued rwts: total=10336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.155 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:24.155 filename0: (groupid=0, jobs=1): err= 0: pid=110838: Mon Dec 16 11:50:23 2024 00:28:24.155 read: IOPS=2072, BW=16.2MiB/s (17.0MB/s)(81.0MiB/5003msec) 00:28:24.155 slat (nsec): min=6875, max=89140, avg=8364.04, stdev=3498.45 00:28:24.155 clat (usec): min=1122, max=4247, avg=3816.80, stdev=197.73 00:28:24.155 lat (usec): min=1138, max=4263, avg=3825.16, stdev=197.49 00:28:24.155 clat percentiles (usec): 00:28:24.155 | 1.00th=[ 3589], 5.00th=[ 3654], 10.00th=[ 3687], 20.00th=[ 3752], 00:28:24.155 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3818], 00:28:24.155 | 70.00th=[ 3884], 80.00th=[ 3916], 90.00th=[ 3982], 95.00th=[ 4047], 00:28:24.155 | 99.00th=[ 4113], 99.50th=[ 4146], 99.90th=[ 4228], 99.95th=[ 4228], 00:28:24.155 | 99.99th=[ 4228] 00:28:24.155 bw ( KiB/s): min=16256, max=16768, per=25.05%, avg=16576.00, stdev=173.31, samples=10 00:28:24.155 iops : min= 2032, max= 2096, avg=2072.00, stdev=21.66, samples=10 00:28:24.155 lat (msec) : 2=0.39%, 4=90.10%, 10=9.51% 00:28:24.155 cpu : usr=94.16%, sys=4.74%, ctx=17, majf=0, minf=0 00:28:24.155 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.155 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.155 issued rwts: total=10368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.155 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:24.155 filename1: (groupid=0, jobs=1): err= 0: pid=110839: Mon Dec 16 11:50:23 2024 00:28:24.155 read: IOPS=2066, BW=16.1MiB/s (16.9MB/s)(80.8MiB/5002msec) 00:28:24.155 slat (nsec): min=3738, max=74681, avg=15402.60, stdev=5152.26 00:28:24.155 clat (usec): min=2618, max=5877, avg=3795.97, stdev=140.65 00:28:24.155 lat (usec): min=2630, max=5897, avg=3811.37, stdev=141.01 00:28:24.155 clat percentiles (usec): 00:28:24.155 | 1.00th=[ 3556], 5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 3687], 00:28:24.155 | 30.00th=[ 3720], 
40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3818], 00:28:24.155 | 70.00th=[ 3851], 80.00th=[ 3884], 90.00th=[ 3949], 95.00th=[ 4015], 00:28:24.155 | 99.00th=[ 4113], 99.50th=[ 4146], 99.90th=[ 5669], 99.95th=[ 5735], 00:28:24.155 | 99.99th=[ 5800] 00:28:24.155 bw ( KiB/s): min=16256, max=16768, per=24.98%, avg=16528.00, stdev=180.55, samples=10 00:28:24.155 iops : min= 2032, max= 2096, avg=2066.00, stdev=22.57, samples=10 00:28:24.155 lat (msec) : 4=93.50%, 10=6.50% 00:28:24.155 cpu : usr=94.48%, sys=4.44%, ctx=14, majf=0, minf=0 00:28:24.155 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.155 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.155 issued rwts: total=10336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.155 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:24.155 filename1: (groupid=0, jobs=1): err= 0: pid=110840: Mon Dec 16 11:50:23 2024 00:28:24.155 read: IOPS=2067, BW=16.2MiB/s (16.9MB/s)(80.8MiB/5003msec) 00:28:24.155 slat (usec): min=3, max=254, avg=15.40, stdev= 7.05 00:28:24.155 clat (usec): min=2798, max=4956, avg=3796.15, stdev=127.29 00:28:24.155 lat (usec): min=2805, max=4968, avg=3811.55, stdev=127.44 00:28:24.155 clat percentiles (usec): 00:28:24.155 | 1.00th=[ 3556], 5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 3687], 00:28:24.155 | 30.00th=[ 3720], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3818], 00:28:24.155 | 70.00th=[ 3851], 80.00th=[ 3884], 90.00th=[ 3982], 95.00th=[ 4015], 00:28:24.155 | 99.00th=[ 4113], 99.50th=[ 4146], 99.90th=[ 4228], 99.95th=[ 4228], 00:28:24.155 | 99.99th=[ 4752] 00:28:24.155 bw ( KiB/s): min=16256, max=16768, per=24.99%, avg=16537.60, stdev=168.52, samples=10 00:28:24.155 iops : min= 2032, max= 2096, avg=2067.20, stdev=21.06, samples=10 00:28:24.155 lat (msec) : 4=93.46%, 10=6.54% 00:28:24.155 cpu : usr=91.80%, sys=6.28%, ctx=755, majf=0, minf=0 00:28:24.155 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.155 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.155 issued rwts: total=10344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.155 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:24.155 00:28:24.155 Run status group 0 (all jobs): 00:28:24.155 READ: bw=64.6MiB/s (67.8MB/s), 16.1MiB/s-16.2MiB/s (16.9MB/s-17.0MB/s), io=323MiB (339MB), run=5002-5003msec 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 
-- # rpc_cmd bdev_null_delete bdev_null0 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.155 00:28:24.155 real 0m23.881s 00:28:24.155 user 2m0.025s 00:28:24.155 sys 0m5.124s 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.155 ************************************ 00:28:24.155 END TEST fio_dif_rand_params 00:28:24.155 ************************************ 00:28:24.155 11:50:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:24.155 11:50:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:24.155 11:50:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:24.155 11:50:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.155 11:50:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:24.155 ************************************ 00:28:24.155 START TEST fio_dif_digest 00:28:24.155 ************************************ 00:28:24.155 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:28:24.155 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:24.155 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:24.155 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:24.155 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:24.155 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:24.155 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:24.155 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:24.156 11:50:24 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.156 bdev_null0 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.156 [2024-12-16 11:50:24.253660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:24.156 { 00:28:24.156 "params": { 00:28:24.156 "name": "Nvme$subsystem", 00:28:24.156 "trtype": "$TEST_TRANSPORT", 00:28:24.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.156 "adrfam": "ipv4", 
00:28:24.156 "trsvcid": "$NVMF_PORT", 00:28:24.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.156 "hdgst": ${hdgst:-false}, 00:28:24.156 "ddgst": ${ddgst:-false} 00:28:24.156 }, 00:28:24.156 "method": "bdev_nvme_attach_controller" 00:28:24.156 } 00:28:24.156 EOF 00:28:24.156 )") 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:24.156 "params": { 00:28:24.156 "name": "Nvme0", 00:28:24.156 "trtype": "tcp", 00:28:24.156 "traddr": "10.0.0.3", 00:28:24.156 "adrfam": "ipv4", 00:28:24.156 "trsvcid": "4420", 00:28:24.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:24.156 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:24.156 "hdgst": true, 00:28:24.156 "ddgst": true 00:28:24.156 }, 00:28:24.156 "method": "bdev_nvme_attach_controller" 00:28:24.156 }' 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:24.156 11:50:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- 
# /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.156 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:24.156 ... 00:28:24.156 fio-3.35 00:28:24.156 Starting 3 threads 00:28:36.366 00:28:36.366 filename0: (groupid=0, jobs=1): err= 0: pid=110946: Mon Dec 16 11:50:35 2024 00:28:36.366 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(284MiB/10044msec) 00:28:36.366 slat (nsec): min=6807, max=53656, avg=11928.25, stdev=3143.94 00:28:36.366 clat (usec): min=6703, max=48204, avg=13216.97, stdev=2124.37 00:28:36.366 lat (usec): min=6715, max=48212, avg=13228.90, stdev=2124.39 00:28:36.366 clat percentiles (usec): 00:28:36.366 | 1.00th=[ 7439], 5.00th=[ 8717], 10.00th=[11469], 20.00th=[12256], 00:28:36.366 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13304], 60.00th=[13698], 00:28:36.366 | 70.00th=[13960], 80.00th=[14484], 90.00th=[15008], 95.00th=[15533], 00:28:36.366 | 99.00th=[16450], 99.50th=[16909], 99.90th=[32113], 99.95th=[46924], 00:28:36.366 | 99.99th=[47973] 00:28:36.366 bw ( KiB/s): min=26624, max=32768, per=33.41%, avg=29078.80, stdev=1456.30, samples=20 00:28:36.366 iops : min= 208, max= 256, avg=227.15, stdev=11.40, samples=20 00:28:36.366 lat (msec) : 10=6.51%, 20=93.27%, 50=0.22% 00:28:36.366 cpu : usr=93.16%, sys=5.29%, ctx=27, majf=0, minf=0 00:28:36.366 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:36.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.366 issued rwts: total=2274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:36.366 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:36.366 filename0: (groupid=0, jobs=1): err= 0: pid=110947: Mon Dec 16 11:50:35 2024 00:28:36.366 read: IOPS=258, BW=32.4MiB/s (33.9MB/s)(324MiB/10006msec) 00:28:36.366 slat (nsec): min=6852, max=77850, avg=12006.09, stdev=3566.43 00:28:36.366 clat (usec): min=7144, max=53489, avg=11569.18, stdev=4646.19 00:28:36.366 lat (usec): min=7153, max=53500, avg=11581.19, stdev=4646.18 00:28:36.366 clat percentiles (usec): 00:28:36.366 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:28:36.366 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:28:36.366 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12780], 00:28:36.366 | 99.00th=[50594], 99.50th=[52167], 99.90th=[52691], 99.95th=[53216], 00:28:36.366 | 99.99th=[53740] 00:28:36.366 bw ( KiB/s): min=28416, max=37632, per=38.07%, avg=33131.79, stdev=2460.38, samples=19 00:28:36.366 iops : min= 222, max= 294, avg=258.84, stdev=19.22, samples=19 00:28:36.366 lat (msec) : 10=13.16%, 20=85.57%, 100=1.27% 00:28:36.366 cpu : usr=93.02%, sys=5.51%, ctx=16, majf=0, minf=0 00:28:36.366 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:36.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.366 issued rwts: total=2591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:36.366 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:36.366 filename0: (groupid=0, jobs=1): err= 0: pid=110948: Mon Dec 16 11:50:35 2024 00:28:36.366 read: IOPS=196, BW=24.5MiB/s (25.7MB/s)(246MiB/10003msec) 00:28:36.366 slat (nsec): min=7020, max=74829, avg=12966.33, stdev=5536.93 00:28:36.366 clat (usec): min=6198, max=20704, 
avg=15257.49, stdev=1729.94 00:28:36.366 lat (usec): min=6211, max=20717, avg=15270.45, stdev=1729.91 00:28:36.366 clat percentiles (usec): 00:28:36.366 | 1.00th=[ 9241], 5.00th=[10290], 10.00th=[13960], 20.00th=[14615], 00:28:36.366 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:28:36.366 | 70.00th=[16057], 80.00th=[16319], 90.00th=[16909], 95.00th=[17171], 00:28:36.366 | 99.00th=[17957], 99.50th=[18482], 99.90th=[20317], 99.95th=[20579], 00:28:36.366 | 99.99th=[20579] 00:28:36.366 bw ( KiB/s): min=23808, max=28160, per=28.78%, avg=25047.58, stdev=1152.04, samples=19 00:28:36.366 iops : min= 186, max= 220, avg=195.68, stdev= 9.00, samples=19 00:28:36.366 lat (msec) : 10=3.92%, 20=95.98%, 50=0.10% 00:28:36.366 cpu : usr=92.71%, sys=5.73%, ctx=18, majf=0, minf=0 00:28:36.366 IO depths : 1=14.7%, 2=85.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:36.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.366 issued rwts: total=1964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:36.366 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:36.366 00:28:36.366 Run status group 0 (all jobs): 00:28:36.366 READ: bw=85.0MiB/s (89.1MB/s), 24.5MiB/s-32.4MiB/s (25.7MB/s-33.9MB/s), io=854MiB (895MB), run=10003-10044msec 00:28:36.366 11:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:36.366 11:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:36.366 11:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:36.366 11:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:36.367 11:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:36.367 11:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:36.367 11:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.367 11:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:36.367 11:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.367 11:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:36.367 11:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.367 11:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:36.367 ************************************ 00:28:36.367 END TEST fio_dif_digest 00:28:36.367 ************************************ 00:28:36.367 11:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.367 00:28:36.367 real 0m11.042s 00:28:36.367 user 0m28.621s 00:28:36.367 sys 0m1.902s 00:28:36.367 11:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.367 11:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:36.367 11:50:35 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:36.367 11:50:35 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 
00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.367 rmmod nvme_tcp 00:28:36.367 rmmod nvme_fabrics 00:28:36.367 rmmod nvme_keyring 00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 110190 ']' 00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 110190 00:28:36.367 11:50:35 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 110190 ']' 00:28:36.367 11:50:35 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 110190 00:28:36.367 11:50:35 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:28:36.367 11:50:35 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.367 11:50:35 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110190 00:28:36.367 killing process with pid 110190 00:28:36.367 11:50:35 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:36.367 11:50:35 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:36.367 11:50:35 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110190' 00:28:36.367 11:50:35 nvmf_dif -- common/autotest_common.sh@973 -- # kill 110190 00:28:36.367 11:50:35 nvmf_dif -- common/autotest_common.sh@978 -- # wait 110190 00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:36.367 11:50:35 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:36.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:36.367 Waiting for block devices as requested 00:28:36.367 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:36.367 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@243 -- 
# ip link delete nvmf_init_if2 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.367 11:50:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:36.367 11:50:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.367 11:50:36 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:28:36.367 00:28:36.367 real 1m0.101s 00:28:36.367 user 3m46.078s 00:28:36.367 sys 0m14.204s 00:28:36.367 11:50:36 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.367 11:50:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:36.367 ************************************ 00:28:36.367 END TEST nvmf_dif 00:28:36.367 ************************************ 00:28:36.367 11:50:36 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:36.367 11:50:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:36.367 11:50:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.367 11:50:36 -- common/autotest_common.sh@10 -- # set +x 00:28:36.367 ************************************ 00:28:36.367 START TEST nvmf_abort_qd_sizes 00:28:36.367 ************************************ 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:36.367 * Looking for test storage... 00:28:36.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:36.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.367 --rc genhtml_branch_coverage=1 00:28:36.367 --rc genhtml_function_coverage=1 00:28:36.367 --rc genhtml_legend=1 00:28:36.367 --rc geninfo_all_blocks=1 00:28:36.367 --rc geninfo_unexecuted_blocks=1 00:28:36.367 00:28:36.367 ' 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:36.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.367 --rc genhtml_branch_coverage=1 00:28:36.367 --rc genhtml_function_coverage=1 00:28:36.367 --rc genhtml_legend=1 00:28:36.367 --rc geninfo_all_blocks=1 00:28:36.367 --rc geninfo_unexecuted_blocks=1 00:28:36.367 00:28:36.367 ' 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:36.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.367 --rc genhtml_branch_coverage=1 00:28:36.367 --rc genhtml_function_coverage=1 00:28:36.367 --rc genhtml_legend=1 00:28:36.367 --rc geninfo_all_blocks=1 00:28:36.367 --rc geninfo_unexecuted_blocks=1 00:28:36.367 00:28:36.367 ' 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:36.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.367 --rc genhtml_branch_coverage=1 00:28:36.367 --rc genhtml_function_coverage=1 00:28:36.367 --rc genhtml_legend=1 00:28:36.367 --rc geninfo_all_blocks=1 00:28:36.367 --rc geninfo_unexecuted_blocks=1 00:28:36.367 00:28:36.367 ' 00:28:36.367 11:50:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:36.368 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:36.368 Cannot find device "nvmf_init_br" 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:36.368 Cannot find device "nvmf_init_br2" 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:36.368 Cannot find device "nvmf_tgt_br" 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:36.368 Cannot find device "nvmf_tgt_br2" 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:36.368 Cannot find device "nvmf_init_br" 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:36.368 Cannot find device "nvmf_init_br2" 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:36.368 Cannot find device "nvmf_tgt_br" 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:36.368 Cannot find device "nvmf_tgt_br2" 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:36.368 Cannot find device "nvmf_br" 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:36.368 Cannot find device "nvmf_init_if" 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:36.368 Cannot find device "nvmf_init_if2" 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:36.368 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:28:36.368 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:28:36.368 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:28:36.369 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:28:36.369 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:28:36.369 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:28:36.369 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:28:36.369 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:28:36.369 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:28:36.369 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:28:36.369 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:28:36.369 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:28:36.369 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms
00:28:36.369
00:28:36.369 --- 10.0.0.3 ping statistics ---
00:28:36.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:36.369 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:28:36.369 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:28:36.369 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms
00:28:36.369
00:28:36.369 --- 10.0.0.4 ping statistics ---
00:28:36.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:36.369 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:28:36.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:36.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:28:36.369
00:28:36.369 --- 10.0.0.1 ping statistics ---
00:28:36.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:36.369 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:28:36.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:36.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms
00:28:36.369
00:28:36.369 --- 10.0.0.2 ping statistics ---
00:28:36.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:36.369 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:28:36.369 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:28:37.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:37.304 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:28:37.304 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=111600
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:28:37.304 11:50:37 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 111600
00:28:37.305 11:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 111600 ']'
00:28:37.305 11:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:37.305 11:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:37.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:37.305 11:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:37.305 11:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:37.305 11:50:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:28:37.305 [2024-12-16 11:50:38.045005] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization...
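The trace above is self-contained enough to replay by hand: two veth pairs per path, the target ends moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4), the initiator ends left on the host (10.0.0.1/10.0.0.2), everything joined by the nvmf_br bridge, and tagged iptables ACCEPT rules for port 4420. A minimal sketch of one of the two paths, with names and commands copied from the trace (run as root; the nvmf_init_if2/nvmf_tgt_if2 pair is handled identically):

    # Sketch only: rebuild the test topology the harness creates above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Tag the rule so teardown can strip it with iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3   # host -> namespaced target, as verified above

Teardown later in the log (nvmftestfini) reverses exactly this: links are detached from the bridge, deleted, the namespace removed, and the SPDK_NVMF-tagged rules stripped via iptables-save | grep -v SPDK_NVMF | iptables-restore.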
00:28:37.305 [2024-12-16 11:50:38.045101] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:37.563 [2024-12-16 11:50:38.200440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:37.563 [2024-12-16 11:50:38.260695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:37.564 [2024-12-16 11:50:38.261039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:37.564 [2024-12-16 11:50:38.261243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:37.564 [2024-12-16 11:50:38.261403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:37.564 [2024-12-16 11:50:38.261446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:37.564 [2024-12-16 11:50:38.262948] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:28:37.564 [2024-12-16 11:50:38.263037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:28:37.564 [2024-12-16 11:50:38.263172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:28:37.564 [2024-12-16 11:50:38.263239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]]
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf=
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"'
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]]
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]]
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]]
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 ))
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 ))
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:37.823 11:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:28:37.823 ************************************
00:28:37.823 START TEST spdk_target_abort
00:28:37.823 ************************************
00:28:37.823 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target
00:28:37.823 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:28:37.823 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
00:28:37.823 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:37.823 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:37.823 spdk_targetn1
00:28:37.823 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:37.823 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:37.823 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:37.823 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:37.823 [2024-12-16 11:50:38.569610] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:38.082 [2024-12-16 11:50:38.613419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3'
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:38.082 11:50:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:41.392 Initializing NVMe Controllers
00:28:41.392 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn
00:28:41.392 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:41.392 Initialization complete. Launching workers.
00:28:41.392 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10410, failed: 0
00:28:41.392 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1104, failed to submit 9306
00:28:41.392 success 754, unsuccessful 350, failed 0
00:28:41.392 11:50:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:41.392 11:50:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:44.675 Initializing NVMe Controllers
00:28:44.675 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn
00:28:44.675 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:44.675 Initialization complete. Launching workers.
00:28:44.675 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6007, failed: 0
00:28:44.675 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1269, failed to submit 4738
00:28:44.675 success 256, unsuccessful 1013, failed 0
00:28:44.675 11:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:44.675 11:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:47.960 Initializing NVMe Controllers
00:28:47.960 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn
00:28:47.960 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:47.960 Initialization complete. Launching workers.
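The runs interleaved through this stretch all follow the same recipe: build the target over RPC (the rpc_cmd calls traced above) and then point build/examples/abort at it once per queue depth; at higher depths proportionally more aborts are submitted and fewer succeed. A condensed sketch of that sequence with paths and flags copied from the trace, assuming rpc_cmd talks to the default /var/tmp/spdk.sock socket:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Target-side setup, as issued through rpc_cmd in the trace
    $SPDK/scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
    # Abort workload: one run per queue depth, mixed 4 KiB I/O, 50% reads
    for qd in 4 24 64; do
      $SPDK/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

Here spdk_targetn1 is the namespace bdev that bdev_nvme_attach_controller created from the PCIe device, exactly as the trace shows.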
00:28:47.960 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28680, failed: 0
00:28:47.960 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2559, failed to submit 26121
00:28:47.960 success 324, unsuccessful 2235, failed 0
00:28:47.960 11:50:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:28:47.960 11:50:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.960 11:50:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:47.960 11:50:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.960 11:50:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:28:47.960 11:50:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.960 11:50:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:48.896 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.896 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 111600
00:28:48.896 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 111600 ']'
00:28:48.896 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 111600
00:28:48.896 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname
00:28:48.896 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:48.896 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111600
00:28:48.896 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:48.896 killing process with pid 111600
11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:48.896 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111600'
00:28:48.896 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 111600
00:28:48.896 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 111600
00:28:49.155
00:28:49.155 real 0m11.202s
00:28:49.155 user 0m43.229s
00:28:49.155 sys 0m1.666s
00:28:49.155 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:49.155 11:50:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:28:49.155 ************************************
00:28:49.155 END TEST spdk_target_abort
00:28:49.155 ************************************
00:28:49.155 11:50:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:28:49.155 11:50:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:49.155 11:50:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:49.155 11:50:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:28:49.155 ************************************
00:28:49.155 START TEST kernel_target_abort
************************************ 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:49.155 11:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:49.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:49.414 Waiting for block devices as requested 00:28:49.672 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:49.672 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:49.672 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:49.672 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:49.672 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:49.672 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:49.672 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:49.672 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:49.672 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:49.672 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:49.672 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:49.672 No valid GPT data, bailing 00:28:49.672 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:49.931 No valid GPT data, bailing 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:49.931 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:49.932 No valid GPT data, bailing 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:49.932 No valid GPT data, bailing 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]]
00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:28:49.932 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 --hostid=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 -a 10.0.0.1 -t tcp -s 4420
00:28:50.191
00:28:50.191 Discovery Log Number of Records 2, Generation counter 2
00:28:50.191 =====Discovery Log Entry 0======
00:28:50.191 trtype: tcp
00:28:50.191 adrfam: ipv4
00:28:50.191 subtype: current discovery subsystem
00:28:50.191 treq: not specified, sq flow control disable supported
00:28:50.191 portid: 1
00:28:50.191 trsvcid: 4420
00:28:50.191 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:28:50.191 traddr: 10.0.0.1
00:28:50.191 eflags: none
00:28:50.191 sectype: none
00:28:50.191 =====Discovery Log Entry 1======
00:28:50.191 trtype: tcp
00:28:50.191 adrfam: ipv4
00:28:50.191 subtype: nvme subsystem
00:28:50.191 treq: not specified, sq flow control disable supported
00:28:50.191 portid: 1
00:28:50.191 trsvcid: 4420
00:28:50.191 subnqn: nqn.2016-06.io.spdk:testnqn
00:28:50.191 traddr: 10.0.0.1
00:28:50.191 eflags: none
00:28:50.191 sectype: none
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:50.191 11:50:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:53.505 Initializing NVMe Controllers
00:28:53.505 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:28:53.505 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:53.505 Initialization complete. Launching workers.
00:28:53.505 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32711, failed: 0
00:28:53.505 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32711, failed to submit 0
00:28:53.505 success 0, unsuccessful 32711, failed 0
00:28:53.505 11:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:53.505 11:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:56.792 Initializing NVMe Controllers
00:28:56.792 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:28:56.792 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:56.792 Initialization complete. Launching workers.
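The kernel-target variant runs the same abort workload, but against a Linux nvmet target that configure_kernel_target assembles through configfs (the mkdir/echo/ln -s trace above). The trace only shows the values being written, not the attribute file names; the names below are the standard nvmet configfs layout and should be treated as an assumption here:

    # Sketch of the configfs setup the trace performs; attribute names assumed.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet   # the trace loads nvmet; nvmet-tcp may also need loading on some kernels
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # expose the subsystem on the port
    nvme discover -t tcp -a 10.0.0.1 -s 4420       # should list it, as in the log above

Note the contrast in the results: the kernel target reports success 0 for every abort, whereas the SPDK target above completed a portion of them, which is the behavioral difference this test exercises.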
00:28:56.792 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64841, failed: 0 00:28:56.792 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26915, failed to submit 37926 00:28:56.792 success 0, unsuccessful 26915, failed 0 00:28:56.792 11:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:56.792 11:50:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:00.079 Initializing NVMe Controllers 00:29:00.079 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:00.079 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:00.079 Initialization complete. Launching workers. 00:29:00.079 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70192, failed: 0 00:29:00.079 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17542, failed to submit 52650 00:29:00.079 success 0, unsuccessful 17542, failed 0 00:29:00.079 11:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:00.079 11:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:00.079 11:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:29:00.079 11:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:00.079 11:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:00.079 11:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:00.079 11:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:00.079 11:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:00.079 11:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:00.079 11:51:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:00.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:01.270 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:01.270 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:01.528 00:29:01.528 real 0m12.282s 00:29:01.528 user 0m5.690s 00:29:01.528 sys 0m3.999s 00:29:01.528 11:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.528 11:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:01.528 ************************************ 00:29:01.528 END TEST kernel_target_abort 00:29:01.528 ************************************ 00:29:01.528 11:51:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:01.528 11:51:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:01.528 
11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:01.528 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:29:01.528 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.528 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:29:01.528 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.528 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.528 rmmod nvme_tcp 00:29:01.528 rmmod nvme_fabrics 00:29:01.528 rmmod nvme_keyring 00:29:01.528 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:01.528 Process with pid 111600 is not found 00:29:01.528 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:29:01.528 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:29:01.529 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 111600 ']' 00:29:01.529 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 111600 00:29:01.529 11:51:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 111600 ']' 00:29:01.529 11:51:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 111600 00:29:01.529 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (111600) - No such process 00:29:01.529 11:51:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 111600 is not found' 00:29:01.529 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:29:01.529 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:01.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:02.046 Waiting for block devices as requested 00:29:02.046 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:02.046 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:02.046 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:02.046 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:02.046 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:29:02.046 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:29:02.046 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:02.046 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:29:02.046 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.046 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:02.046 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:02.046 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:02.046 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:02.304 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:02.304 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:02.304 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:02.304 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:02.304 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:02.304 11:51:02 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:02.304 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:02.304 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:02.304 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:02.305 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:02.305 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:02.305 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.305 11:51:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:02.305 11:51:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.305 11:51:02 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:29:02.305 00:29:02.305 real 0m26.523s 00:29:02.305 user 0m50.101s 00:29:02.305 sys 0m7.084s 00:29:02.305 ************************************ 00:29:02.305 END TEST nvmf_abort_qd_sizes 00:29:02.305 ************************************ 00:29:02.305 11:51:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.305 11:51:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:02.305 11:51:03 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:02.305 11:51:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:02.305 11:51:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.305 11:51:03 -- common/autotest_common.sh@10 -- # set +x 00:29:02.305 ************************************ 00:29:02.305 START TEST keyring_file 00:29:02.305 ************************************ 00:29:02.305 11:51:03 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:02.564 * Looking for test storage... 
00:29:02.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:02.564 11:51:03 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:02.564 11:51:03 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:29:02.564 11:51:03 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:02.564 11:51:03 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@345 -- # : 1 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@353 -- # local d=1 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@355 -- # echo 1 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@353 -- # local d=2 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@355 -- # echo 2 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@368 -- # return 0 00:29:02.564 11:51:03 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.564 11:51:03 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:02.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.564 --rc genhtml_branch_coverage=1 00:29:02.564 --rc genhtml_function_coverage=1 00:29:02.564 --rc genhtml_legend=1 00:29:02.564 --rc geninfo_all_blocks=1 00:29:02.564 --rc geninfo_unexecuted_blocks=1 00:29:02.564 00:29:02.564 ' 00:29:02.564 11:51:03 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:02.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.564 --rc genhtml_branch_coverage=1 00:29:02.564 --rc genhtml_function_coverage=1 00:29:02.564 --rc genhtml_legend=1 00:29:02.564 --rc geninfo_all_blocks=1 00:29:02.564 --rc 
geninfo_unexecuted_blocks=1 00:29:02.564 00:29:02.564 ' 00:29:02.564 11:51:03 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:02.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.564 --rc genhtml_branch_coverage=1 00:29:02.564 --rc genhtml_function_coverage=1 00:29:02.564 --rc genhtml_legend=1 00:29:02.564 --rc geninfo_all_blocks=1 00:29:02.564 --rc geninfo_unexecuted_blocks=1 00:29:02.564 00:29:02.564 ' 00:29:02.564 11:51:03 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:02.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.564 --rc genhtml_branch_coverage=1 00:29:02.564 --rc genhtml_function_coverage=1 00:29:02.564 --rc genhtml_legend=1 00:29:02.564 --rc geninfo_all_blocks=1 00:29:02.564 --rc geninfo_unexecuted_blocks=1 00:29:02.564 00:29:02.564 ' 00:29:02.564 11:51:03 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:02.564 11:51:03 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.564 11:51:03 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.564 11:51:03 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.564 11:51:03 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.564 11:51:03 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.564 11:51:03 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:02.564 11:51:03 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@51 -- # : 0 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:02.564 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.564 11:51:03 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.564 11:51:03 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:02.564 11:51:03 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:02.564 11:51:03 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:02.823 11:51:03 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:02.823 11:51:03 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:02.823 11:51:03 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:02.823 11:51:03 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:02.823 11:51:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:02.823 11:51:03 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:02.823 11:51:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:02.823 11:51:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:02.823 11:51:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:02.823 11:51:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zNq12q3GxO 00:29:02.823 11:51:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:02.823 11:51:03 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:02.824 11:51:03 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:29:02.824 11:51:03 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:02.824 11:51:03 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:02.824 11:51:03 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:29:02.824 11:51:03 keyring_file -- nvmf/common.sh@733 -- # python - 00:29:02.824 11:51:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zNq12q3GxO 00:29:02.824 11:51:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zNq12q3GxO 00:29:02.824 11:51:03 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.zNq12q3GxO 00:29:02.824 11:51:03 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:02.824 11:51:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:02.824 11:51:03 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:02.824 11:51:03 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:02.824 11:51:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:02.824 11:51:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:02.824 11:51:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IH52d3juRp 00:29:02.824 11:51:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:02.824 11:51:03 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:02.824 11:51:03 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:29:02.824 11:51:03 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:02.824 11:51:03 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:29:02.824 11:51:03 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:29:02.824 11:51:03 keyring_file -- nvmf/common.sh@733 -- # python - 00:29:02.824 11:51:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IH52d3juRp 00:29:02.824 11:51:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IH52d3juRp 00:29:02.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
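Note: the prep_key xtrace above (keyring/common.sh@15-23) builds each TLS key file in three steps: mktemp a path, render the raw hex key in the NVMe TLS PSK interchange format, and chmod the file to 0600. A stand-alone sketch of that helper follows; the inline python is an assumption based on the spec's `NVMeTLSkey-1:<digest>:base64(key || crc32):` layout, not a copy of the nvmf/common.sh@730-733 body elided from this trace.

```bash
# Sketch of keyring/common.sh's prep_key, reconstructed from the xtrace.
prep_key() {
    local name=$1 key=$2 digest=$3 path   # name kept only for signature parity
    path=$(mktemp)                        # e.g. /tmp/tmp.zNq12q3GxO above
    python3 -c '
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 of the PSK, little-endian
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]),
                                 base64.b64encode(key + crc).decode()))
' "$key" "$digest" > "$path"
    chmod 0600 "$path"                    # the keyring rejects group/world-readable files
    echo "$path"
}

key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
```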
00:29:02.824 11:51:03 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.IH52d3juRp 00:29:02.824 11:51:03 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:02.824 11:51:03 keyring_file -- keyring/file.sh@30 -- # tgtpid=112516 00:29:02.824 11:51:03 keyring_file -- keyring/file.sh@32 -- # waitforlisten 112516 00:29:02.824 11:51:03 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 112516 ']' 00:29:02.824 11:51:03 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.824 11:51:03 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.824 11:51:03 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.824 11:51:03 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.824 11:51:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:02.824 [2024-12-16 11:51:03.503850] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:29:02.824 [2024-12-16 11:51:03.504101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112516 ] 00:29:03.096 [2024-12-16 11:51:03.648807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.096 [2024-12-16 11:51:03.690757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.388 11:51:03 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.388 11:51:03 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:03.388 11:51:03 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:03.388 11:51:03 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.388 11:51:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:03.388 [2024-12-16 11:51:03.966707] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.388 null0 00:29:03.388 [2024-12-16 11:51:03.998694] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:03.388 [2024-12-16 11:51:03.999050] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.388 11:51:04 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:03.388 [2024-12-16 11:51:04.030681] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:03.388 2024/12/16 11:51:04 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:29:03.388 request: 00:29:03.388 { 00:29:03.388 "method": "nvmf_subsystem_add_listener", 00:29:03.388 "params": { 00:29:03.388 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:03.388 "secure_channel": false, 00:29:03.388 "listen_address": { 00:29:03.388 "trtype": "tcp", 00:29:03.388 "traddr": "127.0.0.1", 00:29:03.388 "trsvcid": "4420" 00:29:03.388 } 00:29:03.388 } 00:29:03.388 } 00:29:03.388 Got JSON-RPC error response 00:29:03.388 GoRPCClient: error on JSON-RPC call 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:03.388 11:51:04 keyring_file -- keyring/file.sh@47 -- # bperfpid=112540 00:29:03.388 11:51:04 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:03.388 11:51:04 keyring_file -- keyring/file.sh@49 -- # waitforlisten 112540 /var/tmp/bperf.sock 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 112540 ']' 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:03.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.388 11:51:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:03.388 [2024-12-16 11:51:04.090037] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
00:29:03.388 [2024-12-16 11:51:04.090330] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112540 ] 00:29:03.647 [2024-12-16 11:51:04.239232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.647 [2024-12-16 11:51:04.292220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.905 11:51:04 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.905 11:51:04 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:03.905 11:51:04 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zNq12q3GxO 00:29:03.905 11:51:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zNq12q3GxO 00:29:04.164 11:51:04 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.IH52d3juRp 00:29:04.164 11:51:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.IH52d3juRp 00:29:04.423 11:51:04 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:29:04.423 11:51:04 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:04.423 11:51:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.423 11:51:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.423 11:51:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:04.681 11:51:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.zNq12q3GxO == \/\t\m\p\/\t\m\p\.\z\N\q\1\2\q\3\G\x\O ]] 00:29:04.681 11:51:05 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:29:04.681 11:51:05 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:29:04.681 11:51:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.681 11:51:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.681 11:51:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:04.939 11:51:05 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.IH52d3juRp == \/\t\m\p\/\t\m\p\.\I\H\5\2\d\3\j\u\R\p ]] 00:29:04.939 11:51:05 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:29:04.939 11:51:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:04.939 11:51:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.939 11:51:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:04.939 11:51:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.939 11:51:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.198 11:51:05 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:05.198 11:51:05 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:29:05.198 11:51:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:05.198 11:51:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:05.198 11:51:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.198 11:51:05 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.198 11:51:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:05.457 11:51:06 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:29:05.457 11:51:06 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.457 11:51:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.716 [2024-12-16 11:51:06.330310] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:05.716 nvme0n1 00:29:05.716 11:51:06 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:29:05.716 11:51:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:05.716 11:51:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:05.716 11:51:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.716 11:51:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:05.716 11:51:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.974 11:51:06 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:29:05.974 11:51:06 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:29:05.974 11:51:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:05.974 11:51:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:05.974 11:51:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.974 11:51:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:05.974 11:51:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.541 11:51:07 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:29:06.541 11:51:07 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:06.541 Running I/O for 1 seconds... 
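Note: the one-second run announced above uses the bdevperf flags visible earlier in this trace; the Latency(us) table that follows reads directly off them. For reference, the launch line with the flags annotated:

```bash
# -q 128: queue depth, -o 4k: I/O size, -w randrw -M 50: 50/50 random
# read/write mix, -t 1: one-second run, -m 2: core mask (hence the
# "Core Mask 0x2" in the job line below), -z: wait for RPC-driven config
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
```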
00:29:07.477 13903.00 IOPS, 54.31 MiB/s 00:29:07.477 Latency(us) 00:29:07.477 [2024-12-16T11:51:08.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.477 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:07.477 nvme0n1 : 1.01 13955.11 54.51 0.00 0.00 9148.47 3440.64 13822.14 00:29:07.477 [2024-12-16T11:51:08.230Z] =================================================================================================================== 00:29:07.477 [2024-12-16T11:51:08.230Z] Total : 13955.11 54.51 0.00 0.00 9148.47 3440.64 13822.14 00:29:07.477 { 00:29:07.477 "results": [ 00:29:07.477 { 00:29:07.477 "job": "nvme0n1", 00:29:07.477 "core_mask": "0x2", 00:29:07.477 "workload": "randrw", 00:29:07.477 "percentage": 50, 00:29:07.477 "status": "finished", 00:29:07.477 "queue_depth": 128, 00:29:07.477 "io_size": 4096, 00:29:07.477 "runtime": 1.005653, 00:29:07.477 "iops": 13955.111753258829, 00:29:07.477 "mibps": 54.5121552861673, 00:29:07.477 "io_failed": 0, 00:29:07.477 "io_timeout": 0, 00:29:07.477 "avg_latency_us": 9148.466704496872, 00:29:07.477 "min_latency_us": 3440.64, 00:29:07.477 "max_latency_us": 13822.138181818182 00:29:07.477 } 00:29:07.477 ], 00:29:07.477 "core_count": 1 00:29:07.477 } 00:29:07.477 11:51:08 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:07.477 11:51:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:07.736 11:51:08 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:29:07.736 11:51:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:07.736 11:51:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.736 11:51:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.736 11:51:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.736 11:51:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:07.994 11:51:08 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:07.994 11:51:08 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:29:07.994 11:51:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:07.994 11:51:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.994 11:51:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.994 11:51:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.994 11:51:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:08.561 11:51:09 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:29:08.561 11:51:09 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:08.561 11:51:09 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:08.561 11:51:09 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:08.561 11:51:09 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:08.561 11:51:09 keyring_file -- common/autotest_common.sh@644 -- # 
case "$(type -t "$arg")" in 00:29:08.561 11:51:09 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:08.561 11:51:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.561 11:51:09 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:08.561 11:51:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:08.561 [2024-12-16 11:51:09.253010] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:08.561 [2024-12-16 11:51:09.253061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb381f0 (107): Transport endpoint is not connected 00:29:08.561 [2024-12-16 11:51:09.254037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb381f0 (9): Bad file descriptor 00:29:08.561 [2024-12-16 11:51:09.255035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:29:08.561 [2024-12-16 11:51:09.255072] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:08.561 [2024-12-16 11:51:09.255082] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:08.561 [2024-12-16 11:51:09.255092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
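Note: the failed attach above is intentional; the NOT wrapper whose xtrace is interleaved here runs the command and inverts its exit status, so the step passes only when the RPC fails. A simplified reconstruction (the real autotest_common.sh helper also validates its argument via `type -t`, as the trace shows; `rpc.py` below stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path):

```bash
# Sketch of the NOT negative-test idiom from autotest_common.sh.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # signal-style exit: a real crash, not a clean failure
    (( es == 0 )) && return 1        # command unexpectedly succeeded
    return 0                         # expected failure: the test step passes
}

# keyring/file.sh@70 equivalent: attaching with the mismatched PSK must fail
NOT rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
```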
00:29:08.561 2024/12/16 11:51:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:08.561 request: 00:29:08.561 { 00:29:08.561 "method": "bdev_nvme_attach_controller", 00:29:08.561 "params": { 00:29:08.561 "name": "nvme0", 00:29:08.561 "trtype": "tcp", 00:29:08.561 "traddr": "127.0.0.1", 00:29:08.561 "adrfam": "ipv4", 00:29:08.561 "trsvcid": "4420", 00:29:08.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.561 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:08.561 "prchk_reftag": false, 00:29:08.561 "prchk_guard": false, 00:29:08.561 "hdgst": false, 00:29:08.561 "ddgst": false, 00:29:08.561 "psk": "key1", 00:29:08.561 "allow_unrecognized_csi": false 00:29:08.561 } 00:29:08.561 } 00:29:08.561 Got JSON-RPC error response 00:29:08.561 GoRPCClient: error on JSON-RPC call 00:29:08.561 11:51:09 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:08.561 11:51:09 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.561 11:51:09 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.561 11:51:09 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:08.561 11:51:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:29:08.561 11:51:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:08.561 11:51:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:08.561 11:51:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.561 11:51:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.561 11:51:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:09.128 11:51:09 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:09.128 11:51:09 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:29:09.128 11:51:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:09.128 11:51:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.128 11:51:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.128 11:51:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:09.128 11:51:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.128 11:51:09 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:29:09.128 11:51:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:29:09.128 11:51:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:09.386 11:51:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:29:09.386 11:51:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:09.645 11:51:10 keyring_file -- keyring/file.sh@78 -- # jq length 00:29:09.645 11:51:10 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:29:09.645 11:51:10 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.904 11:51:10 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:29:09.904 11:51:10 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.zNq12q3GxO 00:29:09.904 11:51:10 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.zNq12q3GxO 00:29:09.904 11:51:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:09.904 11:51:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.zNq12q3GxO 00:29:09.904 11:51:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:09.904 11:51:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:09.904 11:51:10 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:09.904 11:51:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:09.904 11:51:10 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zNq12q3GxO 00:29:09.904 11:51:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zNq12q3GxO 00:29:10.162 [2024-12-16 11:51:10.809393] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zNq12q3GxO': 0100660 00:29:10.162 [2024-12-16 11:51:10.809432] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:10.162 2024/12/16 11:51:10 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.zNq12q3GxO], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:29:10.162 request: 00:29:10.162 { 00:29:10.162 "method": "keyring_file_add_key", 00:29:10.162 "params": { 00:29:10.162 "name": "key0", 00:29:10.162 "path": "/tmp/tmp.zNq12q3GxO" 00:29:10.162 } 00:29:10.162 } 00:29:10.162 Got JSON-RPC error response 00:29:10.162 GoRPCClient: error on JSON-RPC call 00:29:10.162 11:51:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:10.162 11:51:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:10.162 11:51:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:10.162 11:51:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:10.162 11:51:10 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.zNq12q3GxO 00:29:10.162 11:51:10 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zNq12q3GxO 00:29:10.162 11:51:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zNq12q3GxO 00:29:10.421 11:51:11 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.zNq12q3GxO 00:29:10.421 11:51:11 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:29:10.421 11:51:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.421 11:51:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:10.421 11:51:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.421 11:51:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.421 11:51:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:10.680 11:51:11 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:29:10.680 11:51:11 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:10.680 11:51:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:10.680 11:51:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:10.680 11:51:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:10.680 11:51:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.680 11:51:11 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:10.680 11:51:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:10.680 11:51:11 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:10.680 11:51:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:10.939 [2024-12-16 11:51:11.601624] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.zNq12q3GxO': No such file or directory 00:29:10.939 [2024-12-16 11:51:11.601676] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:10.939 [2024-12-16 11:51:11.601718] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:10.939 [2024-12-16 11:51:11.601727] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:29:10.939 [2024-12-16 11:51:11.601737] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:10.939 [2024-12-16 11:51:11.601745] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:10.939 2024/12/16 11:51:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:29:10.939 request: 00:29:10.939 { 00:29:10.939 "method": "bdev_nvme_attach_controller", 00:29:10.939 "params": { 00:29:10.939 "name": "nvme0", 00:29:10.939 "trtype": "tcp", 00:29:10.939 "traddr": "127.0.0.1", 00:29:10.939 "adrfam": "ipv4", 00:29:10.939 "trsvcid": "4420", 00:29:10.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:10.939 "prchk_reftag": false, 00:29:10.939 "prchk_guard": false, 00:29:10.939 "hdgst": false, 00:29:10.939 "ddgst": false, 00:29:10.939 "psk": "key0", 00:29:10.939 "allow_unrecognized_csi": false 00:29:10.939 } 00:29:10.939 } 00:29:10.939 Got JSON-RPC error response 00:29:10.939 
GoRPCClient: error on JSON-RPC call 00:29:10.939 11:51:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:10.939 11:51:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:10.939 11:51:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:10.939 11:51:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:10.939 11:51:11 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:29:10.939 11:51:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:11.198 11:51:11 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:11.198 11:51:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:11.198 11:51:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:11.198 11:51:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:11.198 11:51:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:11.198 11:51:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:11.198 11:51:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ng9mDU3DJI 00:29:11.198 11:51:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:11.198 11:51:11 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:11.198 11:51:11 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:29:11.198 11:51:11 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:11.198 11:51:11 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:11.198 11:51:11 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:29:11.198 11:51:11 keyring_file -- nvmf/common.sh@733 -- # python - 00:29:11.457 11:51:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ng9mDU3DJI 00:29:11.457 11:51:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ng9mDU3DJI 00:29:11.457 11:51:11 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Ng9mDU3DJI 00:29:11.457 11:51:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ng9mDU3DJI 00:29:11.457 11:51:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ng9mDU3DJI 00:29:11.457 11:51:12 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:11.457 11:51:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:12.025 nvme0n1 00:29:12.025 11:51:12 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:29:12.025 11:51:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:12.025 11:51:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.025 11:51:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.025 11:51:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.025 11:51:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
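Note: nearly every assertion in this test is built from two small helpers driven over the bdevperf RPC socket; a reconstruction from the keyring/common.sh@8-12 xtrace:

```bash
bperf_cmd() {
    # keyring/common.sh@8: all RPCs here target bdevperf's socket,
    # not the default /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
}
get_key() {
    # keyring_get_keys returns a JSON array; pick one entry by name
    bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
}
get_refcnt() {
    get_key "$1" | jq -r .refcnt
}

# e.g. keyring/file.sh@100 below: key0 is referenced twice once a
# controller is attached (presumably the keyring entry plus the user)
(( $(get_refcnt key0) == 2 ))
```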
00:29:12.283 11:51:12 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:29:12.283 11:51:12 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:29:12.283 11:51:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:12.544 11:51:13 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:29:12.544 11:51:13 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:29:12.544 11:51:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.544 11:51:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.544 11:51:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.810 11:51:13 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:29:12.810 11:51:13 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:29:12.810 11:51:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:12.810 11:51:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.810 11:51:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.810 11:51:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.810 11:51:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.069 11:51:13 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:29:13.069 11:51:13 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:13.069 11:51:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:13.327 11:51:13 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:29:13.327 11:51:13 keyring_file -- keyring/file.sh@105 -- # jq length 00:29:13.327 11:51:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.586 11:51:14 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:29:13.586 11:51:14 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ng9mDU3DJI 00:29:13.586 11:51:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ng9mDU3DJI 00:29:13.845 11:51:14 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.IH52d3juRp 00:29:13.845 11:51:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.IH52d3juRp 00:29:14.104 11:51:14 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:14.104 11:51:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:14.363 nvme0n1 00:29:14.363 11:51:15 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:29:14.363 11:51:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
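Note: save_config above snapshots the target's entire live JSON configuration, TLS key paths included; keyring/file.sh@116 then boots a second bdevperf that reloads that snapshot through process substitution, which is why the trace later shows `-c /dev/fd/63`. The pattern, using the bperf_cmd sketch above:

```bash
config=$(bperf_cmd save_config)                  # full JSON dump, shown below
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")   # bash expands this to /dev/fd/63
```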
00:29:14.930 11:51:15 keyring_file -- keyring/file.sh@113 -- # config='{ 00:29:14.930 "subsystems": [ 00:29:14.930 { 00:29:14.930 "subsystem": "keyring", 00:29:14.930 "config": [ 00:29:14.930 { 00:29:14.930 "method": "keyring_file_add_key", 00:29:14.930 "params": { 00:29:14.930 "name": "key0", 00:29:14.930 "path": "/tmp/tmp.Ng9mDU3DJI" 00:29:14.930 } 00:29:14.930 }, 00:29:14.930 { 00:29:14.930 "method": "keyring_file_add_key", 00:29:14.930 "params": { 00:29:14.930 "name": "key1", 00:29:14.930 "path": "/tmp/tmp.IH52d3juRp" 00:29:14.930 } 00:29:14.930 } 00:29:14.930 ] 00:29:14.930 }, 00:29:14.930 { 00:29:14.930 "subsystem": "iobuf", 00:29:14.930 "config": [ 00:29:14.930 { 00:29:14.930 "method": "iobuf_set_options", 00:29:14.930 "params": { 00:29:14.930 "enable_numa": false, 00:29:14.930 "large_bufsize": 135168, 00:29:14.930 "large_pool_count": 1024, 00:29:14.930 "small_bufsize": 8192, 00:29:14.930 "small_pool_count": 8192 00:29:14.930 } 00:29:14.930 } 00:29:14.930 ] 00:29:14.930 }, 00:29:14.930 { 00:29:14.930 "subsystem": "sock", 00:29:14.930 "config": [ 00:29:14.930 { 00:29:14.930 "method": "sock_set_default_impl", 00:29:14.930 "params": { 00:29:14.930 "impl_name": "posix" 00:29:14.930 } 00:29:14.930 }, 00:29:14.930 { 00:29:14.930 "method": "sock_impl_set_options", 00:29:14.930 "params": { 00:29:14.930 "enable_ktls": false, 00:29:14.930 "enable_placement_id": 0, 00:29:14.930 "enable_quickack": false, 00:29:14.930 "enable_recv_pipe": true, 00:29:14.930 "enable_zerocopy_send_client": false, 00:29:14.930 "enable_zerocopy_send_server": true, 00:29:14.930 "impl_name": "ssl", 00:29:14.930 "recv_buf_size": 4096, 00:29:14.930 "send_buf_size": 4096, 00:29:14.930 "tls_version": 0, 00:29:14.930 "zerocopy_threshold": 0 00:29:14.930 } 00:29:14.930 }, 00:29:14.930 { 00:29:14.930 "method": "sock_impl_set_options", 00:29:14.930 "params": { 00:29:14.930 "enable_ktls": false, 00:29:14.930 "enable_placement_id": 0, 00:29:14.930 "enable_quickack": false, 00:29:14.930 "enable_recv_pipe": true, 00:29:14.930 "enable_zerocopy_send_client": false, 00:29:14.930 "enable_zerocopy_send_server": true, 00:29:14.930 "impl_name": "posix", 00:29:14.930 "recv_buf_size": 2097152, 00:29:14.930 "send_buf_size": 2097152, 00:29:14.930 "tls_version": 0, 00:29:14.930 "zerocopy_threshold": 0 00:29:14.930 } 00:29:14.930 } 00:29:14.930 ] 00:29:14.930 }, 00:29:14.930 { 00:29:14.930 "subsystem": "vmd", 00:29:14.930 "config": [] 00:29:14.930 }, 00:29:14.930 { 00:29:14.930 "subsystem": "accel", 00:29:14.930 "config": [ 00:29:14.930 { 00:29:14.930 "method": "accel_set_options", 00:29:14.930 "params": { 00:29:14.930 "buf_count": 2048, 00:29:14.930 "large_cache_size": 16, 00:29:14.930 "sequence_count": 2048, 00:29:14.930 "small_cache_size": 128, 00:29:14.930 "task_count": 2048 00:29:14.931 } 00:29:14.931 } 00:29:14.931 ] 00:29:14.931 }, 00:29:14.931 { 00:29:14.931 "subsystem": "bdev", 00:29:14.931 "config": [ 00:29:14.931 { 00:29:14.931 "method": "bdev_set_options", 00:29:14.931 "params": { 00:29:14.931 "bdev_auto_examine": true, 00:29:14.931 "bdev_io_cache_size": 256, 00:29:14.931 "bdev_io_pool_size": 65535, 00:29:14.931 "iobuf_large_cache_size": 16, 00:29:14.931 "iobuf_small_cache_size": 128 00:29:14.931 } 00:29:14.931 }, 00:29:14.931 { 00:29:14.931 "method": "bdev_raid_set_options", 00:29:14.931 "params": { 00:29:14.931 "process_max_bandwidth_mb_sec": 0, 00:29:14.931 "process_window_size_kb": 1024 00:29:14.931 } 00:29:14.931 }, 00:29:14.931 { 00:29:14.931 "method": "bdev_iscsi_set_options", 00:29:14.931 "params": { 00:29:14.931 
"timeout_sec": 30 00:29:14.931 } 00:29:14.931 }, 00:29:14.931 { 00:29:14.931 "method": "bdev_nvme_set_options", 00:29:14.931 "params": { 00:29:14.931 "action_on_timeout": "none", 00:29:14.931 "allow_accel_sequence": false, 00:29:14.931 "arbitration_burst": 0, 00:29:14.931 "bdev_retry_count": 3, 00:29:14.931 "ctrlr_loss_timeout_sec": 0, 00:29:14.931 "delay_cmd_submit": true, 00:29:14.931 "dhchap_dhgroups": [ 00:29:14.931 "null", 00:29:14.931 "ffdhe2048", 00:29:14.931 "ffdhe3072", 00:29:14.931 "ffdhe4096", 00:29:14.931 "ffdhe6144", 00:29:14.931 "ffdhe8192" 00:29:14.931 ], 00:29:14.931 "dhchap_digests": [ 00:29:14.931 "sha256", 00:29:14.931 "sha384", 00:29:14.931 "sha512" 00:29:14.931 ], 00:29:14.931 "disable_auto_failback": false, 00:29:14.931 "fast_io_fail_timeout_sec": 0, 00:29:14.931 "generate_uuids": false, 00:29:14.931 "high_priority_weight": 0, 00:29:14.931 "io_path_stat": false, 00:29:14.931 "io_queue_requests": 512, 00:29:14.931 "keep_alive_timeout_ms": 10000, 00:29:14.931 "low_priority_weight": 0, 00:29:14.931 "medium_priority_weight": 0, 00:29:14.931 "nvme_adminq_poll_period_us": 10000, 00:29:14.931 "nvme_error_stat": false, 00:29:14.931 "nvme_ioq_poll_period_us": 0, 00:29:14.931 "rdma_cm_event_timeout_ms": 0, 00:29:14.931 "rdma_max_cq_size": 0, 00:29:14.931 "rdma_srq_size": 0, 00:29:14.931 "rdma_umr_per_io": false, 00:29:14.931 "reconnect_delay_sec": 0, 00:29:14.931 "timeout_admin_us": 0, 00:29:14.931 "timeout_us": 0, 00:29:14.931 "transport_ack_timeout": 0, 00:29:14.931 "transport_retry_count": 4, 00:29:14.931 "transport_tos": 0 00:29:14.931 } 00:29:14.931 }, 00:29:14.931 { 00:29:14.931 "method": "bdev_nvme_attach_controller", 00:29:14.931 "params": { 00:29:14.931 "adrfam": "IPv4", 00:29:14.931 "ctrlr_loss_timeout_sec": 0, 00:29:14.931 "ddgst": false, 00:29:14.931 "fast_io_fail_timeout_sec": 0, 00:29:14.931 "hdgst": false, 00:29:14.931 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:14.931 "multipath": "multipath", 00:29:14.931 "name": "nvme0", 00:29:14.931 "prchk_guard": false, 00:29:14.931 "prchk_reftag": false, 00:29:14.931 "psk": "key0", 00:29:14.931 "reconnect_delay_sec": 0, 00:29:14.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.931 "traddr": "127.0.0.1", 00:29:14.931 "trsvcid": "4420", 00:29:14.931 "trtype": "TCP" 00:29:14.931 } 00:29:14.931 }, 00:29:14.931 { 00:29:14.931 "method": "bdev_nvme_set_hotplug", 00:29:14.931 "params": { 00:29:14.931 "enable": false, 00:29:14.931 "period_us": 100000 00:29:14.931 } 00:29:14.931 }, 00:29:14.931 { 00:29:14.931 "method": "bdev_wait_for_examine" 00:29:14.931 } 00:29:14.931 ] 00:29:14.931 }, 00:29:14.931 { 00:29:14.931 "subsystem": "nbd", 00:29:14.931 "config": [] 00:29:14.931 } 00:29:14.931 ] 00:29:14.931 }' 00:29:14.931 11:51:15 keyring_file -- keyring/file.sh@115 -- # killprocess 112540 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 112540 ']' 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@958 -- # kill -0 112540 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112540 00:29:14.931 killing process with pid 112540 00:29:14.931 Received shutdown signal, test time was about 1.000000 seconds 00:29:14.931 00:29:14.931 Latency(us) 00:29:14.931 [2024-12-16T11:51:15.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.931 
[2024-12-16T11:51:15.684Z] =================================================================================================================== 00:29:14.931 [2024-12-16T11:51:15.684Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112540' 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@973 -- # kill 112540 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@978 -- # wait 112540 00:29:14.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:14.931 11:51:15 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:14.931 11:51:15 keyring_file -- keyring/file.sh@118 -- # bperfpid=112999 00:29:14.931 11:51:15 keyring_file -- keyring/file.sh@120 -- # waitforlisten 112999 /var/tmp/bperf.sock 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 112999 ']' 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.931 11:51:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.931 11:51:15 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:29:14.931 "subsystems": [ 00:29:14.931 { 00:29:14.931 "subsystem": "keyring", 00:29:14.931 "config": [ 00:29:14.931 { 00:29:14.931 "method": "keyring_file_add_key", 00:29:14.931 "params": { 00:29:14.931 "name": "key0", 00:29:14.931 "path": "/tmp/tmp.Ng9mDU3DJI" 00:29:14.931 } 00:29:14.931 }, 00:29:14.931 { 00:29:14.931 "method": "keyring_file_add_key", 00:29:14.931 "params": { 00:29:14.931 "name": "key1", 00:29:14.931 "path": "/tmp/tmp.IH52d3juRp" 00:29:14.931 } 00:29:14.931 } 00:29:14.931 ] 00:29:14.931 }, 00:29:14.931 { 00:29:14.931 "subsystem": "iobuf", 00:29:14.931 "config": [ 00:29:14.931 { 00:29:14.931 "method": "iobuf_set_options", 00:29:14.931 "params": { 00:29:14.931 "enable_numa": false, 00:29:14.931 "large_bufsize": 135168, 00:29:14.931 "large_pool_count": 1024, 00:29:14.931 "small_bufsize": 8192, 00:29:14.931 "small_pool_count": 8192 00:29:14.931 } 00:29:14.931 } 00:29:14.931 ] 00:29:14.931 }, 00:29:14.931 { 00:29:14.931 "subsystem": "sock", 00:29:14.931 "config": [ 00:29:14.931 { 00:29:14.931 "method": "sock_set_default_impl", 00:29:14.931 "params": { 00:29:14.931 "impl_name": "posix" 00:29:14.931 } 00:29:14.931 }, 00:29:14.931 { 00:29:14.931 "method": "sock_impl_set_options", 00:29:14.931 "params": { 00:29:14.931 "enable_ktls": false, 00:29:14.931 "enable_placement_id": 0, 00:29:14.931 "enable_quickack": false, 00:29:14.931 "enable_recv_pipe": true, 00:29:14.931 "enable_zerocopy_send_client": false, 00:29:14.931 "enable_zerocopy_send_server": true, 00:29:14.931 "impl_name": "ssl", 00:29:14.931 "recv_buf_size": 4096, 00:29:14.931 "send_buf_size": 4096, 00:29:14.931 "tls_version": 0, 00:29:14.931 "zerocopy_threshold": 0 00:29:14.931 } 00:29:14.931 }, 00:29:14.932 { 00:29:14.932 "method": "sock_impl_set_options", 00:29:14.932 "params": { 00:29:14.932 "enable_ktls": false, 00:29:14.932 
"enable_placement_id": 0, 00:29:14.932 "enable_quickack": false, 00:29:14.932 "enable_recv_pipe": true, 00:29:14.932 "enable_zerocopy_send_client": false, 00:29:14.932 "enable_zerocopy_send_server": true, 00:29:14.932 "impl_name": "posix", 00:29:14.932 "recv_buf_size": 2097152, 00:29:14.932 "send_buf_size": 2097152, 00:29:14.932 "tls_version": 0, 00:29:14.932 "zerocopy_threshold": 0 00:29:14.932 } 00:29:14.932 } 00:29:14.932 ] 00:29:14.932 }, 00:29:14.932 { 00:29:14.932 "subsystem": "vmd", 00:29:14.932 "config": [] 00:29:14.932 }, 00:29:14.932 { 00:29:14.932 "subsystem": "accel", 00:29:14.932 "config": [ 00:29:14.932 { 00:29:14.932 "method": "accel_set_options", 00:29:14.932 "params": { 00:29:14.932 "buf_count": 2048, 00:29:14.932 "large_cache_size": 16, 00:29:14.932 "sequence_count": 2048, 00:29:14.932 "small_cache_size": 128, 00:29:14.932 "task_count": 2048 00:29:14.932 } 00:29:14.932 } 00:29:14.932 ] 00:29:14.932 }, 00:29:14.932 { 00:29:14.932 "subsystem": "bdev", 00:29:14.932 "config": [ 00:29:14.932 { 00:29:14.932 "method": "bdev_set_options", 00:29:14.932 "params": { 00:29:14.932 "bdev_auto_examine": true, 00:29:14.932 "bdev_io_cache_size": 256, 00:29:14.932 "bdev_io_pool_size": 65535, 00:29:14.932 "iobuf_large_cache_size": 16, 00:29:14.932 "iobuf_small_cache_size": 128 00:29:14.932 } 00:29:14.932 }, 00:29:14.932 { 00:29:14.932 "method": "bdev_raid_set_options", 00:29:14.932 "params": { 00:29:14.932 "process_max_bandwidth_mb_sec": 0, 00:29:14.932 "process_window_size_kb": 1024 00:29:14.932 } 00:29:14.932 }, 00:29:14.932 { 00:29:14.932 "method": "bdev_iscsi_set_options", 00:29:14.932 "params": { 00:29:14.932 "timeout_sec": 30 00:29:14.932 } 00:29:14.932 }, 00:29:14.932 { 00:29:14.932 "method": "bdev_nvme_set_options", 00:29:14.932 "params": { 00:29:14.932 "action_on_timeout": "none", 00:29:14.932 "allow_accel_sequence": false, 00:29:14.932 "arbitration_burst": 0, 00:29:14.932 "bdev_retry_count": 3, 00:29:14.932 "ctrlr_loss_timeout_sec": 0, 00:29:14.932 "delay_cmd_submit": true, 00:29:14.932 "dhchap_dhgroups": [ 00:29:14.932 "null", 00:29:14.932 "ffdhe2048", 00:29:14.932 "ffdhe3072", 00:29:14.932 "ffdhe4096", 00:29:14.932 "ffdhe6144", 00:29:14.932 "ffdhe8192" 00:29:14.932 ], 00:29:14.932 "dhchap_digests": [ 00:29:14.932 "sha256", 00:29:14.932 "sha384", 00:29:14.932 "sha512" 00:29:14.932 ], 00:29:14.932 "disable_auto_failback": false, 00:29:14.932 "fast_io_fail_timeout_sec": 0, 00:29:14.932 "generate_uuids": false, 00:29:14.932 "high_priority_weight": 0, 00:29:14.932 "io_path_stat": false, 00:29:14.932 "io_queue_requests": 512, 00:29:14.932 "keep_alive_timeout_ms": 10000, 00:29:14.932 "low_priority_weight": 0, 00:29:14.932 "medium_priority_weight": 0, 00:29:14.932 "nvme_adminq_poll_period_us": 10000, 00:29:14.932 "nvme_error_stat": false, 00:29:14.932 "nvme_ioq_poll_period_us": 0, 00:29:14.932 "rdma_cm_event_timeout_ms": 0, 00:29:14.932 "rdma_max_cq_size": 0, 00:29:14.932 "rdma_srq_size": 0, 00:29:14.932 "rdma_umr_per_io": false, 00:29:14.932 "reconnect_delay_sec": 0, 00:29:14.932 "timeout_admin_us": 0, 00:29:14.932 "timeout_us": 0, 00:29:14.932 "transport_ack_timeout": 0, 00:29:14.932 "transport_retry_count": 4, 00:29:14.932 "transport_tos": 0 00:29:14.932 } 00:29:14.932 }, 00:29:14.932 { 00:29:14.932 "method": "bdev_nvme_attach_controller", 00:29:14.932 "params": { 00:29:14.932 "adrfam": "IPv4", 00:29:14.932 "ctrlr_loss_timeout_sec": 0, 00:29:14.932 "ddgst": false, 00:29:14.932 "fast_io_fail_timeout_sec": 0, 00:29:14.932 "hdgst": false, 00:29:14.932 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:29:14.932 "multipath": "multipath", 00:29:14.932 "name": "nvme0", 00:29:14.932 "prchk_guard": false, 00:29:14.932 "prchk_reftag": false, 00:29:14.932 "psk": "key0", 00:29:14.932 "reconnect_delay_sec": 0, 00:29:14.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.932 "traddr": "127.0.0.1", 00:29:14.932 "trsvcid": "4420", 00:29:14.932 "trtype": "TCP" 00:29:14.932 } 00:29:14.932 }, 00:29:14.932 { 00:29:14.932 "method": "bdev_nvme_set_hotplug", 00:29:14.932 "params": { 00:29:14.932 "enable": false, 00:29:14.932 "period_us": 100000 00:29:14.932 } 00:29:14.932 }, 00:29:14.932 { 00:29:14.932 "method": "bdev_wait_for_examine" 00:29:14.932 } 00:29:14.932 ] 00:29:14.932 }, 00:29:14.932 { 00:29:14.932 "subsystem": "nbd", 00:29:14.932 "config": [] 00:29:14.932 } 00:29:14.932 ] 00:29:14.932 }' 00:29:14.932 11:51:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.932 11:51:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:14.932 [2024-12-16 11:51:15.663370] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 00:29:14.932 [2024-12-16 11:51:15.663479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112999 ] 00:29:15.191 [2024-12-16 11:51:15.801268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.191 [2024-12-16 11:51:15.842289] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.449 [2024-12-16 11:51:16.026043] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:16.017 11:51:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.017 11:51:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:16.017 11:51:16 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:29:16.017 11:51:16 keyring_file -- keyring/file.sh@121 -- # jq length 00:29:16.017 11:51:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.276 11:51:16 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:16.276 11:51:16 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:29:16.276 11:51:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:16.276 11:51:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:16.276 11:51:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:16.276 11:51:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.276 11:51:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:16.535 11:51:17 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:29:16.535 11:51:17 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:29:16.535 11:51:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:16.535 11:51:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:16.535 11:51:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:16.535 11:51:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.535 11:51:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
00:29:16.794 11:51:17 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:29:16.794 11:51:17 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:29:16.794 11:51:17 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:29:16.794 11:51:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:17.361 11:51:17 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:29:17.361 11:51:17 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:17.361 11:51:17 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Ng9mDU3DJI /tmp/tmp.IH52d3juRp 00:29:17.362 11:51:17 keyring_file -- keyring/file.sh@20 -- # killprocess 112999 00:29:17.362 11:51:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 112999 ']' 00:29:17.362 11:51:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 112999 00:29:17.362 11:51:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:17.362 11:51:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.362 11:51:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112999 00:29:17.362 killing process with pid 112999 00:29:17.362 Received shutdown signal, test time was about 1.000000 seconds 00:29:17.362 00:29:17.362 Latency(us) 00:29:17.362 [2024-12-16T11:51:18.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.362 [2024-12-16T11:51:18.115Z] =================================================================================================================== 00:29:17.362 [2024-12-16T11:51:18.115Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:17.362 11:51:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:17.362 11:51:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:17.362 11:51:17 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112999' 00:29:17.362 11:51:17 keyring_file -- common/autotest_common.sh@973 -- # kill 112999 00:29:17.362 11:51:17 keyring_file -- common/autotest_common.sh@978 -- # wait 112999 00:29:17.362 11:51:18 keyring_file -- keyring/file.sh@21 -- # killprocess 112516 00:29:17.362 11:51:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 112516 ']' 00:29:17.362 11:51:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 112516 00:29:17.362 11:51:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:17.362 11:51:18 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.362 11:51:18 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112516 00:29:17.362 killing process with pid 112516 00:29:17.362 11:51:18 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:17.362 11:51:18 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:17.362 11:51:18 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112516' 00:29:17.362 11:51:18 keyring_file -- common/autotest_common.sh@973 -- # kill 112516 00:29:17.362 11:51:18 keyring_file -- common/autotest_common.sh@978 -- # wait 112516 00:29:17.930 00:29:17.930 real 0m15.408s 00:29:17.930 user 0m39.093s 00:29:17.930 sys 0m3.232s 00:29:17.930 11:51:18 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.930 ************************************ 00:29:17.930 END TEST keyring_file 
00:29:17.930 ************************************ 00:29:17.930 11:51:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:17.930 11:51:18 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:29:17.930 11:51:18 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:17.930 11:51:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:17.930 11:51:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.930 11:51:18 -- common/autotest_common.sh@10 -- # set +x 00:29:17.930 ************************************ 00:29:17.930 START TEST keyring_linux 00:29:17.930 ************************************ 00:29:17.930 11:51:18 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:17.930 Joined session keyring: 860560625 00:29:17.930 * Looking for test storage... 00:29:17.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:17.930 11:51:18 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:17.930 11:51:18 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:29:17.930 11:51:18 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:18.189 11:51:18 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@345 -- # : 1 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.189 11:51:18 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@368 -- # return 0 00:29:18.190 11:51:18 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.190 11:51:18 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:18.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.190 --rc genhtml_branch_coverage=1 00:29:18.190 --rc genhtml_function_coverage=1 00:29:18.190 --rc genhtml_legend=1 00:29:18.190 --rc geninfo_all_blocks=1 00:29:18.190 --rc geninfo_unexecuted_blocks=1 00:29:18.190 00:29:18.190 ' 00:29:18.190 11:51:18 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:18.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.190 --rc genhtml_branch_coverage=1 00:29:18.190 --rc genhtml_function_coverage=1 00:29:18.190 --rc genhtml_legend=1 00:29:18.190 --rc geninfo_all_blocks=1 00:29:18.190 --rc geninfo_unexecuted_blocks=1 00:29:18.190 00:29:18.190 ' 00:29:18.190 11:51:18 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:18.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.190 --rc genhtml_branch_coverage=1 00:29:18.190 --rc genhtml_function_coverage=1 00:29:18.190 --rc genhtml_legend=1 00:29:18.190 --rc geninfo_all_blocks=1 00:29:18.190 --rc geninfo_unexecuted_blocks=1 00:29:18.190 00:29:18.190 ' 00:29:18.190 11:51:18 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:18.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.190 --rc genhtml_branch_coverage=1 00:29:18.190 --rc genhtml_function_coverage=1 00:29:18.190 --rc genhtml_legend=1 00:29:18.190 --rc geninfo_all_blocks=1 00:29:18.190 --rc geninfo_unexecuted_blocks=1 00:29:18.190 00:29:18.190 ' 00:29:18.190 11:51:18 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.190 11:51:18 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=2fb5c03d-b67f-4471-b8a9-a83bf7b15be8 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.190 11:51:18 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.190 11:51:18 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.190 11:51:18 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.190 11:51:18 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.190 11:51:18 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:18.190 11:51:18 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@51 -- # : 0 
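
The NVME_HOSTNQN sourced above comes from `nvme gen-hostnqn`, which emits a UUID-based NQN. A hedged one-liner equivalent (uuid4 here is an assumption; any RFC 4122 UUID fits the format seen in the log):

    import uuid

    # e.g. nqn.2014-08.org.nvmexpress:uuid:2fb5c03d-b67f-4471-b8a9-a83bf7b15be8
    hostnqn = f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"
    print(hostnqn)
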
00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:18.190 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:18.190 11:51:18 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:18.190 11:51:18 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:18.190 11:51:18 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:18.190 11:51:18 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:18.190 11:51:18 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:18.190 11:51:18 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:18.190 /tmp/:spdk-test:key0 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:18.190 11:51:18 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:29:18.190 11:51:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:18.190 11:51:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:18.190 /tmp/:spdk-test:key1 00:29:18.190 11:51:18 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=113157 00:29:18.190 11:51:18 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:18.190 11:51:18 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 113157 00:29:18.190 11:51:18 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 113157 ']' 00:29:18.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.190 11:51:18 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.190 11:51:18 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.191 11:51:18 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.191 11:51:18 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.191 11:51:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:18.191 [2024-12-16 11:51:18.887851] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
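
The prep_key steps above feed each hex key through format_interchange_psk, producing the NVMeTLSkey-1 interchange framing stored under /tmp/:spdk-test:key0 and :key1. A sketch that reproduces the value printed later in the log; the CRC32 trailer and its little-endian packing are inferred from the 48-character base64 payload (32 key bytes plus a 4-byte checksum), so treat them as assumptions:

    import base64, struct, zlib

    def format_interchange_psk(key: str, hmac: int = 0) -> str:
        raw = key.encode("ascii")
        crc = struct.pack("<I", zlib.crc32(raw))  # assumed 4-byte trailer
        return f"NVMeTLSkey-1:{hmac:02d}:{base64.b64encode(raw + crc).decode()}:"

    # Expected: NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    print(format_interchange_psk("00112233445566778899aabbccddeeff"))
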
00:29:18.191 [2024-12-16 11:51:18.888134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113157 ] 00:29:18.450 [2024-12-16 11:51:19.030240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.450 [2024-12-16 11:51:19.080410] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.386 11:51:19 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.386 11:51:19 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:19.386 11:51:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:19.386 11:51:19 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.386 11:51:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:19.386 [2024-12-16 11:51:19.894874] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.386 null0 00:29:19.386 [2024-12-16 11:51:19.926858] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:19.386 [2024-12-16 11:51:19.927093] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:19.386 11:51:19 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.386 11:51:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:19.386 966093276 00:29:19.386 11:51:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:19.386 244297994 00:29:19.386 11:51:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=113193 00:29:19.386 11:51:19 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:19.386 11:51:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 113193 /var/tmp/bperf.sock 00:29:19.386 11:51:19 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 113193 ']' 00:29:19.386 11:51:19 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:19.386 11:51:19 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:19.386 11:51:19 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:19.386 11:51:19 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.386 11:51:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:19.386 [2024-12-16 11:51:20.000874] Starting SPDK v25.01-pre git sha1 c360129e7 / DPDK 24.03.0 initialization... 
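
The two `keyctl add user ... @s` calls above seed the kernel session keyring; the serials they print (966093276 and 244297994) are searched, printed, and unlinked again further down. A hedged subprocess sketch of that round trip, with the key name and payload copied from the log:

    import subprocess

    def keyctl(*args: str) -> str:
        return subprocess.check_output(["keyctl", *args], text=True).strip()

    psk = "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"
    sn = keyctl("add", "user", ":spdk-test:key0", psk, "@s")        # prints the serial
    assert keyctl("search", "@s", "user", ":spdk-test:key0") == sn  # get_keysn step
    print(keyctl("print", sn))   # echoes the NVMeTLSkey-1 payload back
    keyctl("unlink", sn)         # the "1 links removed" cleanup step
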
00:29:19.386 [2024-12-16 11:51:20.000971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113193 ] 00:29:19.645 [2024-12-16 11:51:20.148459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.645 [2024-12-16 11:51:20.200961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.645 11:51:20 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.645 11:51:20 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:19.645 11:51:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:19.645 11:51:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:19.904 11:51:20 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:19.904 11:51:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:20.472 11:51:20 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:20.472 11:51:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:20.472 [2024-12-16 11:51:21.149387] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:20.472 nvme0n1 00:29:20.730 11:51:21 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:20.730 11:51:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:20.730 11:51:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:20.730 11:51:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:20.730 11:51:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:20.730 11:51:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:20.989 11:51:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:20.989 11:51:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:20.989 11:51:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:20.989 11:51:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:20.989 11:51:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:20.989 11:51:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:20.989 11:51:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:21.248 11:51:21 keyring_linux -- keyring/linux.sh@25 -- # sn=966093276 00:29:21.248 11:51:21 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:21.248 11:51:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:21.248 11:51:21 keyring_linux -- keyring/linux.sh@26 -- # [[ 966093276 == \9\6\6\0\9\3\2\7\6 ]] 00:29:21.248 11:51:21 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 966093276 00:29:21.248 11:51:21 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:21.248 11:51:21 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:21.248 Running I/O for 1 seconds... 00:29:22.625 13442.00 IOPS, 52.51 MiB/s 00:29:22.625 Latency(us) 00:29:22.625 [2024-12-16T11:51:23.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.625 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:22.625 nvme0n1 : 1.01 13443.15 52.51 0.00 0.00 9472.03 2770.39 11736.90 00:29:22.625 [2024-12-16T11:51:23.378Z] =================================================================================================================== 00:29:22.625 [2024-12-16T11:51:23.378Z] Total : 13443.15 52.51 0.00 0.00 9472.03 2770.39 11736.90 00:29:22.625 { 00:29:22.625 "results": [ 00:29:22.625 { 00:29:22.625 "job": "nvme0n1", 00:29:22.625 "core_mask": "0x2", 00:29:22.625 "workload": "randread", 00:29:22.625 "status": "finished", 00:29:22.625 "queue_depth": 128, 00:29:22.625 "io_size": 4096, 00:29:22.625 "runtime": 1.009436, 00:29:22.625 "iops": 13443.150432518752, 00:29:22.625 "mibps": 52.51230637702638, 00:29:22.625 "io_failed": 0, 00:29:22.625 "io_timeout": 0, 00:29:22.625 "avg_latency_us": 9472.031967843504, 00:29:22.625 "min_latency_us": 2770.3854545454546, 00:29:22.625 "max_latency_us": 11736.901818181817 00:29:22.625 } 00:29:22.625 ], 00:29:22.625 "core_count": 1 00:29:22.625 } 00:29:22.625 11:51:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:22.625 11:51:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:22.625 11:51:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:22.625 11:51:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:22.625 11:51:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:22.625 11:51:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:22.625 11:51:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:22.625 11:51:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:22.896 11:51:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:22.896 11:51:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:22.896 11:51:23 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:22.896 11:51:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:22.896 11:51:23 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:29:22.896 11:51:23 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:22.896 11:51:23 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:22.896 11:51:23 keyring_linux -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.896 11:51:23 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:22.896 11:51:23 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.896 11:51:23 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:22.896 11:51:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:23.194 [2024-12-16 11:51:23.839903] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:23.195 [2024-12-16 11:51:23.840320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1851270 (107): Transport endpoint is not connected 00:29:23.195 [2024-12-16 11:51:23.841312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1851270 (9): Bad file descriptor 00:29:23.195 [2024-12-16 11:51:23.842308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:29:23.195 [2024-12-16 11:51:23.842348] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:23.195 [2024-12-16 11:51:23.842358] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:23.195 [2024-12-16 11:51:23.842368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
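
The attach above is deliberately run with :spdk-test:key1, the second key, so the NOT/valid_exec_arg wrappers pass only if the RPC exits non-zero; the JSON-RPC error that follows is the expected outcome. A hedged sketch of the same assertion, with the command line copied from the log:

    import subprocess

    cmd = [
        "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", "-s", "/var/tmp/bperf.sock",
        "bdev_nvme_attach_controller", "-b", "nvme0", "-t", "tcp",
        "-a", "127.0.0.1", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode0", "-q", "nqn.2016-06.io.spdk:host0",
        "--psk", ":spdk-test:key1",
    ]
    # The mismatched PSK must be rejected; the Code=-5 Input/output error
    # in the JSON-RPC response below is the expected outcome.
    assert subprocess.run(cmd, capture_output=True).returncode != 0
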
00:29:23.195 2024/12/16 11:51:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:23.195 request: 00:29:23.195 { 00:29:23.195 "method": "bdev_nvme_attach_controller", 00:29:23.195 "params": { 00:29:23.195 "name": "nvme0", 00:29:23.195 "trtype": "tcp", 00:29:23.195 "traddr": "127.0.0.1", 00:29:23.195 "adrfam": "ipv4", 00:29:23.195 "trsvcid": "4420", 00:29:23.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:23.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:23.195 "prchk_reftag": false, 00:29:23.195 "prchk_guard": false, 00:29:23.195 "hdgst": false, 00:29:23.195 "ddgst": false, 00:29:23.195 "psk": ":spdk-test:key1", 00:29:23.195 "allow_unrecognized_csi": false 00:29:23.195 } 00:29:23.195 } 00:29:23.195 Got JSON-RPC error response 00:29:23.195 GoRPCClient: error on JSON-RPC call 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@33 -- # sn=966093276 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 966093276 00:29:23.195 1 links removed 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@33 -- # sn=244297994 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 244297994 00:29:23.195 1 links removed 00:29:23.195 11:51:23 keyring_linux -- keyring/linux.sh@41 -- # killprocess 113193 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 113193 ']' 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 113193 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113193 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.195 
11:51:23 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:23.195 killing process with pid 113193 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113193' 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@973 -- # kill 113193 00:29:23.195 Received shutdown signal, test time was about 1.000000 seconds 00:29:23.195 00:29:23.195 Latency(us) 00:29:23.195 [2024-12-16T11:51:23.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.195 [2024-12-16T11:51:23.948Z] =================================================================================================================== 00:29:23.195 [2024-12-16T11:51:23.948Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:23.195 11:51:23 keyring_linux -- common/autotest_common.sh@978 -- # wait 113193 00:29:23.454 11:51:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 113157 00:29:23.454 11:51:24 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 113157 ']' 00:29:23.454 11:51:24 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 113157 00:29:23.454 11:51:24 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:23.454 11:51:24 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.454 11:51:24 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113157 00:29:23.454 11:51:24 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:23.454 11:51:24 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:23.454 killing process with pid 113157 00:29:23.454 11:51:24 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113157' 00:29:23.454 11:51:24 keyring_linux -- common/autotest_common.sh@973 -- # kill 113157 00:29:23.454 11:51:24 keyring_linux -- common/autotest_common.sh@978 -- # wait 113157 00:29:24.020 00:29:24.020 real 0m5.984s 00:29:24.020 user 0m11.526s 00:29:24.021 sys 0m1.639s 00:29:24.021 11:51:24 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.021 ************************************ 00:29:24.021 END TEST keyring_linux 00:29:24.021 11:51:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:24.021 ************************************ 00:29:24.021 11:51:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:24.021 11:51:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:24.021 11:51:24 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:24.021 11:51:24 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:24.021 11:51:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:24.021 11:51:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:24.021 11:51:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:24.021 11:51:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:24.021 11:51:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:24.021 11:51:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:24.021 11:51:24 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:24.021 11:51:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:24.021 11:51:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:24.021 11:51:24 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:24.021 11:51:24 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:29:24.021 11:51:24 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:29:24.021 11:51:24 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:29:24.021 11:51:24 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.021 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:29:24.021 11:51:24 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:29:24.021 11:51:24 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:29:24.021 11:51:24 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:29:24.021 11:51:24 -- common/autotest_common.sh@10 -- # set +x 00:29:25.920 INFO: APP EXITING 00:29:25.920 INFO: killing all VMs 00:29:25.920 INFO: killing vhost app 00:29:25.920 INFO: EXIT DONE 00:29:26.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:26.487 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:26.487 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:27.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:27.055 Cleaning 00:29:27.055 Removing: /var/run/dpdk/spdk0/config 00:29:27.055 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:27.055 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:27.055 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:27.055 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:27.055 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:27.055 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:27.055 Removing: /var/run/dpdk/spdk1/config 00:29:27.055 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:27.055 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:27.055 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:27.055 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:27.055 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:27.055 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:27.055 Removing: /var/run/dpdk/spdk2/config 00:29:27.055 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:27.055 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:27.055 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:27.055 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:27.055 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:27.055 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:27.055 Removing: /var/run/dpdk/spdk3/config 00:29:27.055 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:27.055 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:27.314 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:27.314 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:27.314 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:27.314 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:27.314 Removing: /var/run/dpdk/spdk4/config 00:29:27.314 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:27.314 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:27.314 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:27.314 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:27.314 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:27.314 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:27.314 Removing: /dev/shm/nvmf_trace.0 00:29:27.314 Removing: /dev/shm/spdk_tgt_trace.pid60343 00:29:27.314 Removing: /var/run/dpdk/spdk0 00:29:27.314 Removing: /var/run/dpdk/spdk1 00:29:27.314 Removing: /var/run/dpdk/spdk2 00:29:27.314 Removing: /var/run/dpdk/spdk3 00:29:27.314 Removing: /var/run/dpdk/spdk4 00:29:27.314 Removing: /var/run/dpdk/spdk_pid100050 00:29:27.314 Removing: 
/var/run/dpdk/spdk_pid100435 00:29:27.314 Removing: /var/run/dpdk/spdk_pid102921 00:29:27.314 Removing: /var/run/dpdk/spdk_pid102967 00:29:27.314 Removing: /var/run/dpdk/spdk_pid103313 00:29:27.314 Removing: /var/run/dpdk/spdk_pid103359 00:29:27.314 Removing: /var/run/dpdk/spdk_pid103764 00:29:27.314 Removing: /var/run/dpdk/spdk_pid104328 00:29:27.314 Removing: /var/run/dpdk/spdk_pid104777 00:29:27.314 Removing: /var/run/dpdk/spdk_pid105791 00:29:27.314 Removing: /var/run/dpdk/spdk_pid106854 00:29:27.314 Removing: /var/run/dpdk/spdk_pid106971 00:29:27.314 Removing: /var/run/dpdk/spdk_pid107029 00:29:27.314 Removing: /var/run/dpdk/spdk_pid108622 00:29:27.314 Removing: /var/run/dpdk/spdk_pid108943 00:29:27.314 Removing: /var/run/dpdk/spdk_pid109283 00:29:27.314 Removing: /var/run/dpdk/spdk_pid109852 00:29:27.314 Removing: /var/run/dpdk/spdk_pid109857 00:29:27.314 Removing: /var/run/dpdk/spdk_pid110248 00:29:27.314 Removing: /var/run/dpdk/spdk_pid110412 00:29:27.314 Removing: /var/run/dpdk/spdk_pid110569 00:29:27.314 Removing: /var/run/dpdk/spdk_pid110667 00:29:27.314 Removing: /var/run/dpdk/spdk_pid110826 00:29:27.314 Removing: /var/run/dpdk/spdk_pid110932 00:29:27.314 Removing: /var/run/dpdk/spdk_pid111656 00:29:27.314 Removing: /var/run/dpdk/spdk_pid111691 00:29:27.314 Removing: /var/run/dpdk/spdk_pid111727 00:29:27.314 Removing: /var/run/dpdk/spdk_pid111983 00:29:27.314 Removing: /var/run/dpdk/spdk_pid112018 00:29:27.314 Removing: /var/run/dpdk/spdk_pid112048 00:29:27.314 Removing: /var/run/dpdk/spdk_pid112516 00:29:27.314 Removing: /var/run/dpdk/spdk_pid112540 00:29:27.314 Removing: /var/run/dpdk/spdk_pid112999 00:29:27.314 Removing: /var/run/dpdk/spdk_pid113157 00:29:27.314 Removing: /var/run/dpdk/spdk_pid113193 00:29:27.314 Removing: /var/run/dpdk/spdk_pid60190 00:29:27.314 Removing: /var/run/dpdk/spdk_pid60343 00:29:27.314 Removing: /var/run/dpdk/spdk_pid60612 00:29:27.314 Removing: /var/run/dpdk/spdk_pid60705 00:29:27.314 Removing: /var/run/dpdk/spdk_pid60731 00:29:27.314 Removing: /var/run/dpdk/spdk_pid60840 00:29:27.314 Removing: /var/run/dpdk/spdk_pid60857 00:29:27.314 Removing: /var/run/dpdk/spdk_pid60996 00:29:27.314 Removing: /var/run/dpdk/spdk_pid61281 00:29:27.314 Removing: /var/run/dpdk/spdk_pid61465 00:29:27.314 Removing: /var/run/dpdk/spdk_pid61554 00:29:27.314 Removing: /var/run/dpdk/spdk_pid61642 00:29:27.314 Removing: /var/run/dpdk/spdk_pid61745 00:29:27.314 Removing: /var/run/dpdk/spdk_pid61778 00:29:27.315 Removing: /var/run/dpdk/spdk_pid61808 00:29:27.315 Removing: /var/run/dpdk/spdk_pid61883 00:29:27.315 Removing: /var/run/dpdk/spdk_pid61981 00:29:27.315 Removing: /var/run/dpdk/spdk_pid62617 00:29:27.315 Removing: /var/run/dpdk/spdk_pid62667 00:29:27.315 Removing: /var/run/dpdk/spdk_pid62723 00:29:27.315 Removing: /var/run/dpdk/spdk_pid62751 00:29:27.315 Removing: /var/run/dpdk/spdk_pid62830 00:29:27.315 Removing: /var/run/dpdk/spdk_pid62850 00:29:27.315 Removing: /var/run/dpdk/spdk_pid62929 00:29:27.315 Removing: /var/run/dpdk/spdk_pid62944 00:29:27.315 Removing: /var/run/dpdk/spdk_pid62995 00:29:27.315 Removing: /var/run/dpdk/spdk_pid63030 00:29:27.315 Removing: /var/run/dpdk/spdk_pid63077 00:29:27.315 Removing: /var/run/dpdk/spdk_pid63107 00:29:27.315 Removing: /var/run/dpdk/spdk_pid63271 00:29:27.315 Removing: /var/run/dpdk/spdk_pid63302 00:29:27.315 Removing: /var/run/dpdk/spdk_pid63385 00:29:27.315 Removing: /var/run/dpdk/spdk_pid63861 00:29:27.574 Removing: /var/run/dpdk/spdk_pid64239 00:29:27.574 Removing: /var/run/dpdk/spdk_pid66775 00:29:27.574 Removing: 
/var/run/dpdk/spdk_pid66828 00:29:27.574 Removing: /var/run/dpdk/spdk_pid67170 00:29:27.574 Removing: /var/run/dpdk/spdk_pid67226 00:29:27.574 Removing: /var/run/dpdk/spdk_pid67632 00:29:27.574 Removing: /var/run/dpdk/spdk_pid68224 00:29:27.574 Removing: /var/run/dpdk/spdk_pid68667 00:29:27.574 Removing: /var/run/dpdk/spdk_pid69747 00:29:27.574 Removing: /var/run/dpdk/spdk_pid70821 00:29:27.574 Removing: /var/run/dpdk/spdk_pid70939 00:29:27.574 Removing: /var/run/dpdk/spdk_pid71012 00:29:27.574 Removing: /var/run/dpdk/spdk_pid72646 00:29:27.574 Removing: /var/run/dpdk/spdk_pid73000 00:29:27.574 Removing: /var/run/dpdk/spdk_pid76849 00:29:27.574 Removing: /var/run/dpdk/spdk_pid77269 00:29:27.574 Removing: /var/run/dpdk/spdk_pid77879 00:29:27.574 Removing: /var/run/dpdk/spdk_pid78401 00:29:27.574 Removing: /var/run/dpdk/spdk_pid84215 00:29:27.574 Removing: /var/run/dpdk/spdk_pid84731 00:29:27.574 Removing: /var/run/dpdk/spdk_pid84838 00:29:27.574 Removing: /var/run/dpdk/spdk_pid85001 00:29:27.574 Removing: /var/run/dpdk/spdk_pid85043 00:29:27.574 Removing: /var/run/dpdk/spdk_pid85082 00:29:27.574 Removing: /var/run/dpdk/spdk_pid85140 00:29:27.574 Removing: /var/run/dpdk/spdk_pid85291 00:29:27.574 Removing: /var/run/dpdk/spdk_pid85437 00:29:27.574 Removing: /var/run/dpdk/spdk_pid85705 00:29:27.574 Removing: /var/run/dpdk/spdk_pid85829 00:29:27.574 Removing: /var/run/dpdk/spdk_pid86102 00:29:27.574 Removing: /var/run/dpdk/spdk_pid86214 00:29:27.574 Removing: /var/run/dpdk/spdk_pid86335 00:29:27.574 Removing: /var/run/dpdk/spdk_pid86739 00:29:27.574 Removing: /var/run/dpdk/spdk_pid87195 00:29:27.574 Removing: /var/run/dpdk/spdk_pid87196 00:29:27.574 Removing: /var/run/dpdk/spdk_pid87197 00:29:27.574 Removing: /var/run/dpdk/spdk_pid87474 00:29:27.574 Removing: /var/run/dpdk/spdk_pid87756 00:29:27.574 Removing: /var/run/dpdk/spdk_pid88165 00:29:27.574 Removing: /var/run/dpdk/spdk_pid88515 00:29:27.574 Removing: /var/run/dpdk/spdk_pid89113 00:29:27.574 Removing: /var/run/dpdk/spdk_pid89116 00:29:27.574 Removing: /var/run/dpdk/spdk_pid89499 00:29:27.574 Removing: /var/run/dpdk/spdk_pid89513 00:29:27.574 Removing: /var/run/dpdk/spdk_pid89527 00:29:27.574 Removing: /var/run/dpdk/spdk_pid89558 00:29:27.574 Removing: /var/run/dpdk/spdk_pid89563 00:29:27.574 Removing: /var/run/dpdk/spdk_pid89963 00:29:27.574 Removing: /var/run/dpdk/spdk_pid90006 00:29:27.574 Removing: /var/run/dpdk/spdk_pid90392 00:29:27.574 Removing: /var/run/dpdk/spdk_pid90629 00:29:27.574 Removing: /var/run/dpdk/spdk_pid91145 00:29:27.574 Removing: /var/run/dpdk/spdk_pid91786 00:29:27.574 Removing: /var/run/dpdk/spdk_pid93198 00:29:27.574 Removing: /var/run/dpdk/spdk_pid93853 00:29:27.574 Removing: /var/run/dpdk/spdk_pid93859 00:29:27.574 Removing: /var/run/dpdk/spdk_pid95897 00:29:27.574 Removing: /var/run/dpdk/spdk_pid95974 00:29:27.574 Removing: /var/run/dpdk/spdk_pid96051 00:29:27.574 Removing: /var/run/dpdk/spdk_pid96127 00:29:27.574 Removing: /var/run/dpdk/spdk_pid96278 00:29:27.574 Removing: /var/run/dpdk/spdk_pid96349 00:29:27.574 Removing: /var/run/dpdk/spdk_pid96427 00:29:27.574 Removing: /var/run/dpdk/spdk_pid96504 00:29:27.574 Removing: /var/run/dpdk/spdk_pid96875 00:29:27.574 Removing: /var/run/dpdk/spdk_pid97625 00:29:27.574 Removing: /var/run/dpdk/spdk_pid99035 00:29:27.574 Removing: /var/run/dpdk/spdk_pid99228 00:29:27.574 Removing: /var/run/dpdk/spdk_pid99518 00:29:27.574 Clean 00:29:27.833 11:51:28 -- common/autotest_common.sh@1453 -- # return 0 00:29:27.833 11:51:28 -- spdk/autotest.sh@389 -- # timing_exit 
post_cleanup 00:29:27.833 11:51:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:27.833 11:51:28 -- common/autotest_common.sh@10 -- # set +x 00:29:27.833 11:51:28 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:29:27.833 11:51:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:27.833 11:51:28 -- common/autotest_common.sh@10 -- # set +x 00:29:27.833 11:51:28 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:27.833 11:51:28 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:27.833 11:51:28 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:27.833 11:51:28 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:29:27.833 11:51:28 -- spdk/autotest.sh@398 -- # hostname 00:29:27.833 11:51:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:28.092 geninfo: WARNING: invalid characters removed from testname! 00:29:50.032 11:51:49 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:52.578 11:51:53 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:55.114 11:51:55 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:57.647 11:51:57 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:00.179 11:52:00 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:02.738 11:52:02 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:05.288 11:52:05 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:05.288 11:52:05 -- spdk/autorun.sh@1 -- $ timing_finish 00:30:05.288 11:52:05 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:30:05.288 11:52:05 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:05.288 11:52:05 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:30:05.288 11:52:05 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:05.288 + [[ -n 5260 ]] 00:30:05.288 + sudo kill 5260 00:30:05.297 [Pipeline] } 00:30:05.312 [Pipeline] // timeout 00:30:05.317 [Pipeline] } 00:30:05.331 [Pipeline] // stage 00:30:05.336 [Pipeline] } 00:30:05.349 [Pipeline] // catchError 00:30:05.358 [Pipeline] stage 00:30:05.360 [Pipeline] { (Stop VM) 00:30:05.372 [Pipeline] sh 00:30:05.651 + vagrant halt 00:30:08.938 ==> default: Halting domain... 00:30:14.218 [Pipeline] sh 00:30:14.496 + vagrant destroy -f 00:30:17.788 ==> default: Removing domain... 00:30:17.810 [Pipeline] sh 00:30:18.099 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/output 00:30:18.143 [Pipeline] } 00:30:18.158 [Pipeline] // stage 00:30:18.163 [Pipeline] } 00:30:18.177 [Pipeline] // dir 00:30:18.182 [Pipeline] } 00:30:18.196 [Pipeline] // wrap 00:30:18.202 [Pipeline] } 00:30:18.215 [Pipeline] // catchError 00:30:18.224 [Pipeline] stage 00:30:18.226 [Pipeline] { (Epilogue) 00:30:18.238 [Pipeline] sh 00:30:18.519 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:25.096 [Pipeline] catchError 00:30:25.098 [Pipeline] { 00:30:25.111 [Pipeline] sh 00:30:25.392 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:25.392 Artifacts sizes are good 00:30:25.400 [Pipeline] } 00:30:25.414 [Pipeline] // catchError 00:30:25.425 [Pipeline] archiveArtifacts 00:30:25.432 Archiving artifacts 00:30:25.575 [Pipeline] cleanWs 00:30:25.587 [WS-CLEANUP] Deleting project workspace... 00:30:25.587 [WS-CLEANUP] Deferred wipeout is used... 00:30:25.593 [WS-CLEANUP] done 00:30:25.595 [Pipeline] } 00:30:25.610 [Pipeline] // stage 00:30:25.615 [Pipeline] } 00:30:25.629 [Pipeline] // node 00:30:25.634 [Pipeline] End of Pipeline 00:30:25.693 Finished: SUCCESS
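
For reference, the MiB/s figure in the bdevperf summary recorded earlier follows directly from IOPS times the 4 KiB io_size in the results JSON:

    iops, io_size = 13443.150432518752, 4096         # from the results JSON
    print(round(iops * io_size / (1024 * 1024), 2))  # 52.51 MiB/s ("mibps")
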